Test Report: KVM_Linux_crio 17761

                    
4145ffc8c3ff629bd64b588eb0db70699e9f5232:2023-12-12:32257

Failed tests (28/307)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 160.99
48 TestAddons/StoppedEnableDisable 155.6
117 TestFunctional/parallel/ImageCommands/ImageListShort 2.48
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 169.82
212 TestMultiNode/serial/PingHostFrom2Pods 3.44
219 TestMultiNode/serial/RestartKeepsNodes 687.74
221 TestMultiNode/serial/StopMultiNode 143.06
228 TestPreload 281.85
234 TestRunningBinaryUpgrade 139.94
242 TestStoppedBinaryUpgrade/Upgrade 306.28
333 TestStartStop/group/embed-certs/serial/Stop 139.73
336 TestStartStop/group/old-k8s-version/serial/Stop 139.59
339 TestStartStop/group/no-preload/serial/Stop 139.74
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.9
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
351 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.26
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.23
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.11
354 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.16
355 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 543.43
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 442.02
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 311.54
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 327.59
365 TestStartStop/group/newest-cni/serial/Stop 140.95
366 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.38
TestAddons/parallel/Ingress (160.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-361656 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-361656 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (5.673278998s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-361656 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-361656 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5b06d6d2-998c-4fa5-b223-3add010e8e2f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5b06d6d2-998c-4fa5-b223-3add010e8e2f] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.019936398s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-361656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.766954036s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-361656 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.86
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-361656 addons disable ingress-dns --alsologtostderr -v=1: (1.344716508s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-361656 addons disable ingress --alsologtostderr -v=1: (8.039370866s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-361656 -n addons-361656
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-361656 logs -n 25: (1.456334071s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | -p download-only-526453                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC | 12 Dec 23 22:03 UTC |
	| delete  | -p download-only-526453                                                                     | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC | 12 Dec 23 22:03 UTC |
	| delete  | -p download-only-526453                                                                     | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC | 12 Dec 23 22:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-207180 | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC |                     |
	|         | binary-mirror-207180                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37499                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-207180                                                                     | binary-mirror-207180 | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC | 12 Dec 23 22:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC |                     |
	|         | addons-361656                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC |                     |
	|         | addons-361656                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-361656 --wait=true                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:03 UTC | 12 Dec 23 22:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | addons-361656                                                                               |                      |         |         |                     |                     |
	| addons  | addons-361656 addons disable                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-361656 ip                                                                            | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	| addons  | addons-361656 addons disable                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | addons-361656                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | -p addons-361656                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | -p addons-361656                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-361656 ssh curl -s                                                                   | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-361656 addons                                                                        | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-361656 ssh cat                                                                       | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | /opt/local-path-provisioner/pvc-274c2300-c02f-4998-a3f1-15f0e2208ef9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-361656 addons disable                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-361656 addons                                                                        | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:06 UTC | 12 Dec 23 22:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-361656 addons                                                                        | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:06 UTC | 12 Dec 23 22:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-361656 ip                                                                            | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:07 UTC |
	| addons  | addons-361656 addons disable                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-361656 addons disable                                                                | addons-361656        | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:03:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:03:01.346202   84229 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:03:01.346347   84229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:03:01.346360   84229 out.go:309] Setting ErrFile to fd 2...
	I1212 22:03:01.346365   84229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:03:01.346577   84229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:03:01.347229   84229 out.go:303] Setting JSON to false
	I1212 22:03:01.348096   84229 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9935,"bootTime":1702408646,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:03:01.348158   84229 start.go:138] virtualization: kvm guest
	I1212 22:03:01.350766   84229 out.go:177] * [addons-361656] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:03:01.352605   84229 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:03:01.352601   84229 notify.go:220] Checking for updates...
	I1212 22:03:01.354395   84229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:03:01.355939   84229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:03:01.357466   84229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:03:01.359050   84229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:03:01.360652   84229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:03:01.362484   84229 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:03:01.394988   84229 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 22:03:01.396673   84229 start.go:298] selected driver: kvm2
	I1212 22:03:01.396693   84229 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:03:01.396705   84229 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:03:01.397426   84229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:03:01.397508   84229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:03:01.412841   84229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:03:01.412888   84229 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:03:01.413095   84229 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:03:01.413152   84229 cni.go:84] Creating CNI manager for ""
	I1212 22:03:01.413164   84229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:03:01.413176   84229 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:03:01.413182   84229 start_flags.go:323] config:
	{Name:addons-361656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-361656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:03:01.413341   84229 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:03:01.415369   84229 out.go:177] * Starting control plane node addons-361656 in cluster addons-361656
	I1212 22:03:01.416797   84229 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:01.416835   84229 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:03:01.416848   84229 cache.go:56] Caching tarball of preloaded images
	I1212 22:03:01.416950   84229 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:03:01.416964   84229 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:03:01.417307   84229 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/config.json ...
	I1212 22:03:01.417331   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/config.json: {Name:mk5b9e0620b4003e0d57419886eb6048c9d44714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:01.417530   84229 start.go:365] acquiring machines lock for addons-361656: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:03:01.417595   84229 start.go:369] acquired machines lock for "addons-361656" in 44.862µs
	I1212 22:03:01.417619   84229 start.go:93] Provisioning new machine with config: &{Name:addons-361656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-361656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:01.417677   84229 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 22:03:01.420515   84229 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1212 22:03:01.420651   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:01.420693   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:01.435590   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1212 22:03:01.436285   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:01.436999   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:01.437024   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:01.437413   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:01.437644   84229 main.go:141] libmachine: (addons-361656) Calling .GetMachineName
	I1212 22:03:01.437815   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:01.438012   84229 start.go:159] libmachine.API.Create for "addons-361656" (driver="kvm2")
	I1212 22:03:01.438047   84229 client.go:168] LocalClient.Create starting
	I1212 22:03:01.438083   84229 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 22:03:01.614880   84229 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 22:03:01.701914   84229 main.go:141] libmachine: Running pre-create checks...
	I1212 22:03:01.701942   84229 main.go:141] libmachine: (addons-361656) Calling .PreCreateCheck
	I1212 22:03:01.702541   84229 main.go:141] libmachine: (addons-361656) Calling .GetConfigRaw
	I1212 22:03:01.703051   84229 main.go:141] libmachine: Creating machine...
	I1212 22:03:01.703068   84229 main.go:141] libmachine: (addons-361656) Calling .Create
	I1212 22:03:01.703254   84229 main.go:141] libmachine: (addons-361656) Creating KVM machine...
	I1212 22:03:01.704603   84229 main.go:141] libmachine: (addons-361656) DBG | found existing default KVM network
	I1212 22:03:01.705304   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:01.705103   84251 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1212 22:03:01.710910   84229 main.go:141] libmachine: (addons-361656) DBG | trying to create private KVM network mk-addons-361656 192.168.39.0/24...
	I1212 22:03:01.782931   84229 main.go:141] libmachine: (addons-361656) DBG | private KVM network mk-addons-361656 192.168.39.0/24 created
	I1212 22:03:01.782980   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:01.782895   84251 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:03:01.782995   84229 main.go:141] libmachine: (addons-361656) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656 ...
	I1212 22:03:01.783014   84229 main.go:141] libmachine: (addons-361656) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:03:01.783035   84229 main.go:141] libmachine: (addons-361656) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:03:01.996224   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:01.995992   84251 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa...
	I1212 22:03:02.062911   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:02.062759   84251 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/addons-361656.rawdisk...
	I1212 22:03:02.062948   84229 main.go:141] libmachine: (addons-361656) DBG | Writing magic tar header
	I1212 22:03:02.062959   84229 main.go:141] libmachine: (addons-361656) DBG | Writing SSH key tar header
	I1212 22:03:02.062968   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:02.062911   84251 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656 ...
	I1212 22:03:02.063046   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656
	I1212 22:03:02.063093   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656 (perms=drwx------)
	I1212 22:03:02.063112   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 22:03:02.063130   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:03:02.063145   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 22:03:02.063176   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 22:03:02.063199   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 22:03:02.063208   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 22:03:02.063217   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 22:03:02.063226   84229 main.go:141] libmachine: (addons-361656) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 22:03:02.063257   84229 main.go:141] libmachine: (addons-361656) Creating domain...
	I1212 22:03:02.063272   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 22:03:02.063282   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home/jenkins
	I1212 22:03:02.063313   84229 main.go:141] libmachine: (addons-361656) DBG | Checking permissions on dir: /home
	I1212 22:03:02.063343   84229 main.go:141] libmachine: (addons-361656) DBG | Skipping /home - not owner
	I1212 22:03:02.064469   84229 main.go:141] libmachine: (addons-361656) define libvirt domain using xml: 
	I1212 22:03:02.064494   84229 main.go:141] libmachine: (addons-361656) <domain type='kvm'>
	I1212 22:03:02.064509   84229 main.go:141] libmachine: (addons-361656)   <name>addons-361656</name>
	I1212 22:03:02.064532   84229 main.go:141] libmachine: (addons-361656)   <memory unit='MiB'>4000</memory>
	I1212 22:03:02.064551   84229 main.go:141] libmachine: (addons-361656)   <vcpu>2</vcpu>
	I1212 22:03:02.064560   84229 main.go:141] libmachine: (addons-361656)   <features>
	I1212 22:03:02.064566   84229 main.go:141] libmachine: (addons-361656)     <acpi/>
	I1212 22:03:02.064577   84229 main.go:141] libmachine: (addons-361656)     <apic/>
	I1212 22:03:02.064611   84229 main.go:141] libmachine: (addons-361656)     <pae/>
	I1212 22:03:02.064638   84229 main.go:141] libmachine: (addons-361656)     
	I1212 22:03:02.064654   84229 main.go:141] libmachine: (addons-361656)   </features>
	I1212 22:03:02.064668   84229 main.go:141] libmachine: (addons-361656)   <cpu mode='host-passthrough'>
	I1212 22:03:02.064680   84229 main.go:141] libmachine: (addons-361656)   
	I1212 22:03:02.064691   84229 main.go:141] libmachine: (addons-361656)   </cpu>
	I1212 22:03:02.064703   84229 main.go:141] libmachine: (addons-361656)   <os>
	I1212 22:03:02.064720   84229 main.go:141] libmachine: (addons-361656)     <type>hvm</type>
	I1212 22:03:02.064740   84229 main.go:141] libmachine: (addons-361656)     <boot dev='cdrom'/>
	I1212 22:03:02.064752   84229 main.go:141] libmachine: (addons-361656)     <boot dev='hd'/>
	I1212 22:03:02.064764   84229 main.go:141] libmachine: (addons-361656)     <bootmenu enable='no'/>
	I1212 22:03:02.064775   84229 main.go:141] libmachine: (addons-361656)   </os>
	I1212 22:03:02.064795   84229 main.go:141] libmachine: (addons-361656)   <devices>
	I1212 22:03:02.064812   84229 main.go:141] libmachine: (addons-361656)     <disk type='file' device='cdrom'>
	I1212 22:03:02.064831   84229 main.go:141] libmachine: (addons-361656)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/boot2docker.iso'/>
	I1212 22:03:02.064845   84229 main.go:141] libmachine: (addons-361656)       <target dev='hdc' bus='scsi'/>
	I1212 22:03:02.064857   84229 main.go:141] libmachine: (addons-361656)       <readonly/>
	I1212 22:03:02.064869   84229 main.go:141] libmachine: (addons-361656)     </disk>
	I1212 22:03:02.064901   84229 main.go:141] libmachine: (addons-361656)     <disk type='file' device='disk'>
	I1212 22:03:02.064926   84229 main.go:141] libmachine: (addons-361656)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 22:03:02.064960   84229 main.go:141] libmachine: (addons-361656)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/addons-361656.rawdisk'/>
	I1212 22:03:02.064978   84229 main.go:141] libmachine: (addons-361656)       <target dev='hda' bus='virtio'/>
	I1212 22:03:02.064992   84229 main.go:141] libmachine: (addons-361656)     </disk>
	I1212 22:03:02.065005   84229 main.go:141] libmachine: (addons-361656)     <interface type='network'>
	I1212 22:03:02.065019   84229 main.go:141] libmachine: (addons-361656)       <source network='mk-addons-361656'/>
	I1212 22:03:02.065034   84229 main.go:141] libmachine: (addons-361656)       <model type='virtio'/>
	I1212 22:03:02.065044   84229 main.go:141] libmachine: (addons-361656)     </interface>
	I1212 22:03:02.065055   84229 main.go:141] libmachine: (addons-361656)     <interface type='network'>
	I1212 22:03:02.065068   84229 main.go:141] libmachine: (addons-361656)       <source network='default'/>
	I1212 22:03:02.065088   84229 main.go:141] libmachine: (addons-361656)       <model type='virtio'/>
	I1212 22:03:02.065104   84229 main.go:141] libmachine: (addons-361656)     </interface>
	I1212 22:03:02.065117   84229 main.go:141] libmachine: (addons-361656)     <serial type='pty'>
	I1212 22:03:02.065129   84229 main.go:141] libmachine: (addons-361656)       <target port='0'/>
	I1212 22:03:02.065141   84229 main.go:141] libmachine: (addons-361656)     </serial>
	I1212 22:03:02.065150   84229 main.go:141] libmachine: (addons-361656)     <console type='pty'>
	I1212 22:03:02.065162   84229 main.go:141] libmachine: (addons-361656)       <target type='serial' port='0'/>
	I1212 22:03:02.065174   84229 main.go:141] libmachine: (addons-361656)     </console>
	I1212 22:03:02.065200   84229 main.go:141] libmachine: (addons-361656)     <rng model='virtio'>
	I1212 22:03:02.065223   84229 main.go:141] libmachine: (addons-361656)       <backend model='random'>/dev/random</backend>
	I1212 22:03:02.065237   84229 main.go:141] libmachine: (addons-361656)     </rng>
	I1212 22:03:02.065246   84229 main.go:141] libmachine: (addons-361656)     
	I1212 22:03:02.065258   84229 main.go:141] libmachine: (addons-361656)     
	I1212 22:03:02.065270   84229 main.go:141] libmachine: (addons-361656)   </devices>
	I1212 22:03:02.065287   84229 main.go:141] libmachine: (addons-361656) </domain>
	I1212 22:03:02.065308   84229 main.go:141] libmachine: (addons-361656) 
	I1212 22:03:02.070080   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:4c:38:4b in network default
	I1212 22:03:02.070716   84229 main.go:141] libmachine: (addons-361656) Ensuring networks are active...
	I1212 22:03:02.070738   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:02.071512   84229 main.go:141] libmachine: (addons-361656) Ensuring network default is active
	I1212 22:03:02.071954   84229 main.go:141] libmachine: (addons-361656) Ensuring network mk-addons-361656 is active
	I1212 22:03:02.072471   84229 main.go:141] libmachine: (addons-361656) Getting domain xml...
	I1212 22:03:02.073179   84229 main.go:141] libmachine: (addons-361656) Creating domain...
	I1212 22:03:03.300638   84229 main.go:141] libmachine: (addons-361656) Waiting to get IP...
	I1212 22:03:03.301471   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:03.301862   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:03.301889   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:03.301850   84251 retry.go:31] will retry after 204.582203ms: waiting for machine to come up
	I1212 22:03:03.508428   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:03.509058   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:03.509083   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:03.508960   84251 retry.go:31] will retry after 292.889283ms: waiting for machine to come up
	I1212 22:03:03.803706   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:03.804116   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:03.804144   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:03.804050   84251 retry.go:31] will retry after 407.602482ms: waiting for machine to come up
	I1212 22:03:04.213809   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:04.214351   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:04.214380   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:04.214289   84251 retry.go:31] will retry after 385.172128ms: waiting for machine to come up
	I1212 22:03:04.601077   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:04.601688   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:04.601727   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:04.601676   84251 retry.go:31] will retry after 582.6214ms: waiting for machine to come up
	I1212 22:03:05.185443   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:05.185944   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:05.185974   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:05.185891   84251 retry.go:31] will retry after 881.168943ms: waiting for machine to come up
	I1212 22:03:06.069279   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:06.069784   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:06.069811   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:06.069747   84251 retry.go:31] will retry after 894.03025ms: waiting for machine to come up
	I1212 22:03:06.965495   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:06.965839   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:06.965873   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:06.965771   84251 retry.go:31] will retry after 1.33116679s: waiting for machine to come up
	I1212 22:03:08.299286   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:08.299705   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:08.299735   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:08.299637   84251 retry.go:31] will retry after 1.176150855s: waiting for machine to come up
	I1212 22:03:09.477258   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:09.477655   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:09.477686   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:09.477600   84251 retry.go:31] will retry after 1.682978043s: waiting for machine to come up
	I1212 22:03:11.162541   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:11.163011   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:11.163034   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:11.162966   84251 retry.go:31] will retry after 2.399185345s: waiting for machine to come up
	I1212 22:03:13.564479   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:13.564976   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:13.565012   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:13.564885   84251 retry.go:31] will retry after 2.559663997s: waiting for machine to come up
	I1212 22:03:16.125632   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:16.126049   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:16.126074   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:16.126002   84251 retry.go:31] will retry after 3.62228261s: waiting for machine to come up
	I1212 22:03:19.752132   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:19.752566   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find current IP address of domain addons-361656 in network mk-addons-361656
	I1212 22:03:19.752594   84229 main.go:141] libmachine: (addons-361656) DBG | I1212 22:03:19.752509   84251 retry.go:31] will retry after 3.855770406s: waiting for machine to come up
	I1212 22:03:23.611431   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.611897   84229 main.go:141] libmachine: (addons-361656) Found IP for machine: 192.168.39.86
	I1212 22:03:23.611911   84229 main.go:141] libmachine: (addons-361656) Reserving static IP address...
	I1212 22:03:23.611964   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has current primary IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.612257   84229 main.go:141] libmachine: (addons-361656) DBG | unable to find host DHCP lease matching {name: "addons-361656", mac: "52:54:00:9c:1e:2f", ip: "192.168.39.86"} in network mk-addons-361656
	I1212 22:03:23.684176   84229 main.go:141] libmachine: (addons-361656) DBG | Getting to WaitForSSH function...
	I1212 22:03:23.684217   84229 main.go:141] libmachine: (addons-361656) Reserved static IP address: 192.168.39.86
	I1212 22:03:23.684230   84229 main.go:141] libmachine: (addons-361656) Waiting for SSH to be available...
	I1212 22:03:23.686674   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.686986   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:23.687016   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.687216   84229 main.go:141] libmachine: (addons-361656) DBG | Using SSH client type: external
	I1212 22:03:23.687264   84229 main.go:141] libmachine: (addons-361656) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa (-rw-------)
	I1212 22:03:23.687303   84229 main.go:141] libmachine: (addons-361656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:03:23.687318   84229 main.go:141] libmachine: (addons-361656) DBG | About to run SSH command:
	I1212 22:03:23.687333   84229 main.go:141] libmachine: (addons-361656) DBG | exit 0
	I1212 22:03:23.783277   84229 main.go:141] libmachine: (addons-361656) DBG | SSH cmd err, output: <nil>: 
	I1212 22:03:23.783556   84229 main.go:141] libmachine: (addons-361656) KVM machine creation complete!
	I1212 22:03:23.783893   84229 main.go:141] libmachine: (addons-361656) Calling .GetConfigRaw
	I1212 22:03:23.784466   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:23.784688   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:23.784872   84229 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 22:03:23.784894   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:23.786247   84229 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 22:03:23.786264   84229 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 22:03:23.786272   84229 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 22:03:23.786282   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:23.788736   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.789113   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:23.789141   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.789321   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:23.789474   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:23.789616   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:23.789713   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:23.789891   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:23.790374   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:23.790390   84229 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 22:03:23.918602   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:03:23.918629   84229 main.go:141] libmachine: Detecting the provisioner...
	I1212 22:03:23.918639   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:23.921268   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.921593   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:23.921635   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:23.921799   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:23.921987   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:23.922142   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:23.922293   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:23.922457   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:23.922808   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:23.922822   84229 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 22:03:24.052273   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 22:03:24.052415   84229 main.go:141] libmachine: found compatible host: buildroot
	I1212 22:03:24.052435   84229 main.go:141] libmachine: Provisioning with buildroot...
	I1212 22:03:24.052448   84229 main.go:141] libmachine: (addons-361656) Calling .GetMachineName
	I1212 22:03:24.052684   84229 buildroot.go:166] provisioning hostname "addons-361656"
	I1212 22:03:24.052709   84229 main.go:141] libmachine: (addons-361656) Calling .GetMachineName
	I1212 22:03:24.052911   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:24.055297   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.055726   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.055764   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.055898   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:24.056097   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.056246   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.056361   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:24.056516   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:24.056818   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:24.056832   84229 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-361656 && echo "addons-361656" | sudo tee /etc/hostname
	I1212 22:03:24.195689   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-361656
	
	I1212 22:03:24.195723   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:24.198441   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.198747   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.198768   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.198925   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:24.199140   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.199343   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.199524   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:24.199679   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:24.200050   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:24.200076   84229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-361656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-361656/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-361656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:03:24.335549   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:03:24.335583   84229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:03:24.335620   84229 buildroot.go:174] setting up certificates
	I1212 22:03:24.335656   84229 provision.go:83] configureAuth start
	I1212 22:03:24.335670   84229 main.go:141] libmachine: (addons-361656) Calling .GetMachineName
	I1212 22:03:24.335943   84229 main.go:141] libmachine: (addons-361656) Calling .GetIP
	I1212 22:03:24.338223   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.338612   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.338633   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.338789   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:24.341510   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.341843   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.341872   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.342030   84229 provision.go:138] copyHostCerts
	I1212 22:03:24.342103   84229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:03:24.342238   84229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:03:24.342320   84229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:03:24.342394   84229 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.addons-361656 san=[192.168.39.86 192.168.39.86 localhost 127.0.0.1 minikube addons-361656]
	I1212 22:03:24.495453   84229 provision.go:172] copyRemoteCerts
	I1212 22:03:24.495553   84229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:03:24.495587   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:24.498181   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.498473   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.498500   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.498631   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:24.498811   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.498991   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:24.499168   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:24.593596   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:03:24.620297   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:03:24.645847   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:03:24.668781   84229 provision.go:86] duration metric: configureAuth took 333.108315ms
	I1212 22:03:24.668804   84229 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:03:24.668986   84229 config.go:182] Loaded profile config "addons-361656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:24.669059   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:24.671625   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.671985   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:24.672012   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:24.672128   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:24.672338   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.672617   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:24.672822   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:24.672993   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:24.673320   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:24.673337   84229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:03:24.997950   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:03:24.997986   84229 main.go:141] libmachine: Checking connection to Docker...
	I1212 22:03:24.998026   84229 main.go:141] libmachine: (addons-361656) Calling .GetURL
	I1212 22:03:24.999536   84229 main.go:141] libmachine: (addons-361656) DBG | Using libvirt version 6000000
	I1212 22:03:25.001594   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.002002   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.002047   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.002185   84229 main.go:141] libmachine: Docker is up and running!
	I1212 22:03:25.002203   84229 main.go:141] libmachine: Reticulating splines...
	I1212 22:03:25.002210   84229 client.go:171] LocalClient.Create took 23.564155752s
	I1212 22:03:25.002229   84229 start.go:167] duration metric: libmachine.API.Create for "addons-361656" took 23.564221878s
	I1212 22:03:25.002248   84229 start.go:300] post-start starting for "addons-361656" (driver="kvm2")
	I1212 22:03:25.002260   84229 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:03:25.002275   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:25.002532   84229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:03:25.002560   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:25.004595   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.004931   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.004950   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.005092   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:25.005295   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:25.005462   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:25.005574   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:25.097983   84229 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:03:25.102365   84229 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:03:25.102386   84229 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:03:25.102458   84229 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:03:25.102482   84229 start.go:303] post-start completed in 100.226148ms
	I1212 22:03:25.102514   84229 main.go:141] libmachine: (addons-361656) Calling .GetConfigRaw
	I1212 22:03:25.103042   84229 main.go:141] libmachine: (addons-361656) Calling .GetIP
	I1212 22:03:25.106618   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.106942   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.106974   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.107192   84229 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/config.json ...
	I1212 22:03:25.107438   84229 start.go:128] duration metric: createHost completed in 23.689747563s
	I1212 22:03:25.107468   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:25.109552   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.109833   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.109864   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.109990   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:25.110162   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:25.110348   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:25.110457   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:25.110603   84229 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:25.110908   84229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1212 22:03:25.110920   84229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 22:03:25.236005   84229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702418605.213816803
	
	I1212 22:03:25.236035   84229 fix.go:206] guest clock: 1702418605.213816803
	I1212 22:03:25.236057   84229 fix.go:219] Guest: 2023-12-12 22:03:25.213816803 +0000 UTC Remote: 2023-12-12 22:03:25.107454112 +0000 UTC m=+23.811651762 (delta=106.362691ms)
	I1212 22:03:25.236082   84229 fix.go:190] guest clock delta is within tolerance: 106.362691ms
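The guest-clock check above amounts to comparing the guest's reported time with the host-side reference and accepting the drift if it stays under a tolerance. Below is a minimal Go sketch of that comparison using the two timestamps from the log; the one-second tolerance constant is an assumption for illustration, not minikube's actual value.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps taken from the log lines above.
		guest := time.Date(2023, 12, 12, 22, 3, 25, 213816803, time.UTC)
		remote := time.Date(2023, 12, 12, 22, 3, 25, 107454112, time.UTC)

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		const tolerance = time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
		// Output: delta=106.362691ms within tolerance: true
	}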
	I1212 22:03:25.236089   84229 start.go:83] releasing machines lock for "addons-361656", held for 23.818481779s
	I1212 22:03:25.236117   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:25.236415   84229 main.go:141] libmachine: (addons-361656) Calling .GetIP
	I1212 22:03:25.238937   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.239297   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.239337   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.239469   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:25.240085   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:25.240250   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:25.240349   84229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:03:25.240388   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:25.240535   84229 ssh_runner.go:195] Run: cat /version.json
	I1212 22:03:25.240562   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:25.242960   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.243154   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.243321   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.243349   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.243489   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:25.243492   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:25.243510   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:25.243650   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:25.243669   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:25.243818   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:25.243887   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:25.243966   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:25.244075   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:25.244103   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:25.362939   84229 ssh_runner.go:195] Run: systemctl --version
	I1212 22:03:25.368853   84229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:03:25.532427   84229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 22:03:25.538428   84229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:03:25.538492   84229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:03:25.558192   84229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:03:25.558221   84229 start.go:475] detecting cgroup driver to use...
	I1212 22:03:25.558307   84229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:03:25.573468   84229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:03:25.586940   84229 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:03:25.587022   84229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:03:25.600709   84229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:03:25.614586   84229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:03:25.718035   84229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:03:25.833441   84229 docker.go:219] disabling docker service ...
	I1212 22:03:25.833525   84229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:03:25.847357   84229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:03:25.858313   84229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:03:25.957847   84229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:03:26.059489   84229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:03:26.073418   84229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:03:26.094230   84229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:03:26.094293   84229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:26.103816   84229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:03:26.103895   84229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:26.113361   84229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:26.122835   84229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:26.132368   84229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:03:26.142226   84229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:03:26.150470   84229 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:03:26.150544   84229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:03:26.163459   84229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:03:26.172322   84229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:03:26.274948   84229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:03:26.452484   84229 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:03:26.452587   84229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
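The "Will wait 60s for socket path" step is a poll-until-exists loop with a deadline. A minimal Go sketch of that pattern follows; the poll interval is an assumption and minikube's own implementation may differ.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}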
	I1212 22:03:26.458240   84229 start.go:543] Will wait 60s for crictl version
	I1212 22:03:26.458323   84229 ssh_runner.go:195] Run: which crictl
	I1212 22:03:26.462336   84229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:03:26.511474   84229 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:03:26.511589   84229 ssh_runner.go:195] Run: crio --version
	I1212 22:03:26.560378   84229 ssh_runner.go:195] Run: crio --version
	I1212 22:03:26.611729   84229 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:03:26.613399   84229 main.go:141] libmachine: (addons-361656) Calling .GetIP
	I1212 22:03:26.616202   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:26.616590   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:26.616615   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:26.616793   84229 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:03:26.621260   84229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:03:26.634433   84229 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:26.634510   84229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:26.668385   84229 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 22:03:26.668479   84229 ssh_runner.go:195] Run: which lz4
	I1212 22:03:26.672499   84229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 22:03:26.676678   84229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:03:26.676711   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 22:03:28.445489   84229 crio.go:444] Took 1.773042 seconds to copy over tarball
	I1212 22:03:28.445565   84229 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:03:31.624001   84229 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.178403967s)
	I1212 22:03:31.624038   84229 crio.go:451] Took 3.178521 seconds to extract the tarball
	I1212 22:03:31.624048   84229 ssh_runner.go:146] rm: /preloaded.tar.lz4
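The preload handling above follows a check-then-copy pattern: stat the tarball on the guest, and only transfer and extract it when the check fails. Here is a small local-filesystem Go sketch of the same idea; the paths are illustrative, and the real flow copies over SSH and extracts with tar rather than copying locally.

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// ensurePreload copies src to dst only if dst does not already exist,
	// mirroring the "existence check ... Process exited with status 1" fallback above.
	func ensurePreload(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			fmt.Println("preload already present, skipping copy")
			return nil
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := ensurePreload("preloaded-images.tar.lz4", "preloaded-copy.tar.lz4"); err != nil {
			fmt.Println("copy failed:", err)
		}
	}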
	I1212 22:03:31.666186   84229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:31.735795   84229 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:03:31.735821   84229 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:03:31.735921   84229 ssh_runner.go:195] Run: crio config
	I1212 22:03:31.802036   84229 cni.go:84] Creating CNI manager for ""
	I1212 22:03:31.802058   84229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:03:31.802076   84229 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:03:31.802098   84229 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-361656 NodeName:addons-361656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:03:31.802225   84229 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-361656"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:03:31.802325   84229 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-361656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-361656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:03:31.802412   84229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:03:31.811105   84229 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:03:31.811195   84229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:03:31.819660   84229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1212 22:03:31.835881   84229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:03:31.851975   84229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1212 22:03:31.867426   84229 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I1212 22:03:31.870871   84229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:03:31.882827   84229 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656 for IP: 192.168.39.86
	I1212 22:03:31.882871   84229 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:31.883021   84229 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:03:32.101574   84229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt ...
	I1212 22:03:32.101605   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt: {Name:mk94c73b728f8b13d2926e6b95853e808186d897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.101766   84229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key ...
	I1212 22:03:32.101779   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key: {Name:mk1f314481754f198daba7f20c7be081dbd76a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.101859   84229 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:03:32.187857   84229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt ...
	I1212 22:03:32.187887   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt: {Name:mk21e2498c254772214fc94b18b64d46389ebdca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.188032   84229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key ...
	I1212 22:03:32.188042   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key: {Name:mk4bc14b6a70c28c8a87624002d813a5f113f4c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
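For context, the two CA generation steps above ("minikubeCA" and "proxyClientCA") each produce a self-signed x509 certificate and private key written out as PEM. The following is a rough, self-contained Go sketch of that kind of CA generation, not minikube's actual crypto.go code; the key size, validity period, and output file names are illustrative assumptions.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA private key (2048-bit RSA assumed for this sketch).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		// Self-signed CA template; the CommonName matches the "minikubeCA" name above.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0), // assumed validity
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}

		// Write the cert and key as PEM, mirroring the ca.crt / ca.key pair in the log.
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
			panic(err)
		}
		if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
			panic(err)
		}
	}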
	I1212 22:03:32.188142   84229 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.key
	I1212 22:03:32.188155   84229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt with IP's: []
	I1212 22:03:32.326264   84229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt ...
	I1212 22:03:32.326293   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: {Name:mk9658ebd85836c0021c395fbafce7babff8fdef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.326440   84229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.key ...
	I1212 22:03:32.326451   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.key: {Name:mke77b0a3903443efc8bf263db6d7a21231f2ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.326518   84229 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key.966dc577
	I1212 22:03:32.326534   84229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt.966dc577 with IP's: [192.168.39.86 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:03:32.477521   84229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt.966dc577 ...
	I1212 22:03:32.477573   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt.966dc577: {Name:mk8215f9d5d933b24afbb87033e46b787cb3f004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.477808   84229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key.966dc577 ...
	I1212 22:03:32.477827   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key.966dc577: {Name:mkc1fe8830f3a065a2fc3ca428d26aea3331fee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.477919   84229 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt.966dc577 -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt
	I1212 22:03:32.478020   84229 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key.966dc577 -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key
	I1212 22:03:32.478083   84229 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.key
	I1212 22:03:32.478105   84229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.crt with IP's: []
	I1212 22:03:32.537387   84229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.crt ...
	I1212 22:03:32.537416   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.crt: {Name:mk490dab787400b2c4f3531e542226ce359e3489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.537581   84229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.key ...
	I1212 22:03:32.537599   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.key: {Name:mk17138f16fe6bbd076aa18d8a9f9b2b216666e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:32.537800   84229 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:03:32.537844   84229 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:03:32.537890   84229 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:03:32.537935   84229 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:03:32.538672   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:03:32.563261   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:03:32.586971   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:03:32.613182   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:03:32.635099   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:03:32.657791   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:03:32.680502   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:03:32.704363   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:03:32.727142   84229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:03:32.750007   84229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:03:32.765914   84229 ssh_runner.go:195] Run: openssl version
	I1212 22:03:32.771184   84229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:03:32.781161   84229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:32.785599   84229 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:32.785664   84229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:32.791109   84229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:03:32.803005   84229 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:03:32.807091   84229 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:03:32.807144   84229 kubeadm.go:404] StartCluster: {Name:addons-361656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-361656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:03:32.807264   84229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:03:32.807353   84229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:03:32.846922   84229 cri.go:89] found id: ""
	I1212 22:03:32.846997   84229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:03:32.856077   84229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:03:32.864683   84229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:03:32.872868   84229 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:03:32.872911   84229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 22:03:32.927709   84229 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:03:32.927787   84229 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:03:33.074365   84229 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:03:33.074500   84229 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:03:33.074651   84229 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:03:33.289310   84229 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:03:33.380830   84229 out.go:204]   - Generating certificates and keys ...
	I1212 22:03:33.380996   84229 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:03:33.381077   84229 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:03:33.560404   84229 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:03:33.796288   84229 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:03:33.844606   84229 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:03:34.034700   84229 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:03:34.219115   84229 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:03:34.219300   84229 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-361656 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I1212 22:03:34.336930   84229 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:03:34.337076   84229 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-361656 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I1212 22:03:34.528068   84229 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:03:34.642042   84229 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:03:34.720558   84229 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:03:34.720692   84229 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:03:34.905625   84229 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:03:35.027625   84229 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:03:35.113956   84229 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:03:35.206883   84229 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:03:35.207533   84229 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:03:35.212748   84229 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:03:35.284355   84229 out.go:204]   - Booting up control plane ...
	I1212 22:03:35.284536   84229 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:03:35.284681   84229 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:03:35.284812   84229 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:03:35.284956   84229 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:03:35.285089   84229 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:03:35.285152   84229 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:03:35.375326   84229 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:03:42.874827   84229 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.501772 seconds
	I1212 22:03:42.874985   84229 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:03:42.900498   84229 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:03:43.444397   84229 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:03:43.444635   84229 kubeadm.go:322] [mark-control-plane] Marking the node addons-361656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:03:43.960191   84229 kubeadm.go:322] [bootstrap-token] Using token: igw3i3.6cfv78cc3idmgdca
	I1212 22:03:43.961670   84229 out.go:204]   - Configuring RBAC rules ...
	I1212 22:03:43.961805   84229 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:03:43.968491   84229 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:03:43.977117   84229 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:03:43.981266   84229 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:03:43.988433   84229 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:03:43.992171   84229 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:03:44.006986   84229 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:03:44.295213   84229 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:03:44.373819   84229 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:03:44.374839   84229 kubeadm.go:322] 
	I1212 22:03:44.374960   84229 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:03:44.374983   84229 kubeadm.go:322] 
	I1212 22:03:44.375067   84229 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:03:44.375075   84229 kubeadm.go:322] 
	I1212 22:03:44.375102   84229 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:03:44.375193   84229 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:03:44.375291   84229 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:03:44.375302   84229 kubeadm.go:322] 
	I1212 22:03:44.375397   84229 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:03:44.375417   84229 kubeadm.go:322] 
	I1212 22:03:44.375493   84229 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:03:44.375506   84229 kubeadm.go:322] 
	I1212 22:03:44.375560   84229 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:03:44.375651   84229 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:03:44.375742   84229 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:03:44.375751   84229 kubeadm.go:322] 
	I1212 22:03:44.375864   84229 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:03:44.375961   84229 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:03:44.375975   84229 kubeadm.go:322] 
	I1212 22:03:44.376092   84229 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token igw3i3.6cfv78cc3idmgdca \
	I1212 22:03:44.376237   84229 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 22:03:44.376268   84229 kubeadm.go:322] 	--control-plane 
	I1212 22:03:44.376277   84229 kubeadm.go:322] 
	I1212 22:03:44.376391   84229 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:03:44.376401   84229 kubeadm.go:322] 
	I1212 22:03:44.376514   84229 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token igw3i3.6cfv78cc3idmgdca \
	I1212 22:03:44.376641   84229 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:03:44.377207   84229 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:03:44.377244   84229 cni.go:84] Creating CNI manager for ""
	I1212 22:03:44.377259   84229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:03:44.379319   84229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 22:03:44.380944   84229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 22:03:44.409668   84229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 22:03:44.490724   84229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:03:44.490818   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.490852   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=addons-361656 minikube.k8s.io/updated_at=2023_12_12T22_03_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.729713   84229 ops.go:34] apiserver oom_adj: -16
	I1212 22:03:44.805534   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.908855   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:45.513587   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:46.013023   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:46.513630   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:47.013332   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:47.513249   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:48.013914   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:48.513480   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:49.013106   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:49.513633   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:50.013319   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:50.513783   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:51.013748   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:51.513628   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:52.013281   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:52.513243   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:53.013738   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:53.513583   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:54.013953   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:54.512964   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:55.013234   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:55.513714   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:56.013344   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:56.513709   84229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:56.654025   84229 kubeadm.go:1088] duration metric: took 12.163271463s to wait for elevateKubeSystemPrivileges.
	I1212 22:03:56.654068   84229 kubeadm.go:406] StartCluster complete in 23.846929353s
	I1212 22:03:56.654106   84229 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:56.654246   84229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:03:56.654664   84229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:56.654875   84229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:03:56.655025   84229 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 22:03:56.655128   84229 addons.go:69] Setting gcp-auth=true in profile "addons-361656"
	I1212 22:03:56.655174   84229 addons.go:69] Setting storage-provisioner=true in profile "addons-361656"
	I1212 22:03:56.655176   84229 addons.go:69] Setting ingress=true in profile "addons-361656"
	I1212 22:03:56.655203   84229 addons.go:231] Setting addon storage-provisioner=true in "addons-361656"
	I1212 22:03:56.655192   84229 addons.go:69] Setting metrics-server=true in profile "addons-361656"
	I1212 22:03:56.655235   84229 addons.go:69] Setting default-storageclass=true in profile "addons-361656"
	I1212 22:03:56.655272   84229 addons.go:231] Setting addon metrics-server=true in "addons-361656"
	I1212 22:03:56.655300   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655318   84229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-361656"
	I1212 22:03:56.655146   84229 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-361656"
	I1212 22:03:56.655329   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655340   84229 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-361656"
	I1212 22:03:56.655402   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655701   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.655755   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655790   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.655215   84229 addons.go:231] Setting addon ingress=true in "addons-361656"
	I1212 22:03:56.655165   84229 addons.go:69] Setting registry=true in profile "addons-361656"
	I1212 22:03:56.655823   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655837   84229 addons.go:231] Setting addon registry=true in "addons-361656"
	I1212 22:03:56.655854   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655895   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655792   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.656102   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.656158   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.656189   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.656246   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.656290   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655210   84229 mustload.go:65] Loading cluster: addons-361656
	I1212 22:03:56.655213   84229 config.go:182] Loaded profile config "addons-361656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:56.655222   84229 addons.go:69] Setting helm-tiller=true in profile "addons-361656"
	I1212 22:03:56.656484   84229 addons.go:231] Setting addon helm-tiller=true in "addons-361656"
	I1212 22:03:56.656488   84229 config.go:182] Loaded profile config "addons-361656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:56.656519   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655224   84229 addons.go:69] Setting cloud-spanner=true in profile "addons-361656"
	I1212 22:03:56.656601   84229 addons.go:231] Setting addon cloud-spanner=true in "addons-361656"
	I1212 22:03:56.656646   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.656819   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.656829   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.656838   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.656846   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.656993   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.657035   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655226   84229 addons.go:69] Setting ingress-dns=true in profile "addons-361656"
	I1212 22:03:56.657088   84229 addons.go:231] Setting addon ingress-dns=true in "addons-361656"
	I1212 22:03:56.657159   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655135   84229 addons.go:69] Setting volumesnapshots=true in profile "addons-361656"
	I1212 22:03:56.657335   84229 addons.go:231] Setting addon volumesnapshots=true in "addons-361656"
	I1212 22:03:56.657381   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.657517   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.657535   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.657716   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.657745   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655231   84229 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-361656"
	I1212 22:03:56.659548   84229 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-361656"
	I1212 22:03:56.659591   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.659939   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.659968   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.655235   84229 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-361656"
	I1212 22:03:56.663419   84229 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-361656"
	I1212 22:03:56.655249   84229 addons.go:69] Setting inspektor-gadget=true in profile "addons-361656"
	I1212 22:03:56.664035   84229 addons.go:231] Setting addon inspektor-gadget=true in "addons-361656"
	I1212 22:03:56.664080   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.655798   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.664414   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.664433   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.664443   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.675217   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I1212 22:03:56.675694   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.676284   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.676315   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.676716   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.677275   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.677317   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.679062   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I1212 22:03:56.679560   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.680067   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.680104   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.680516   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.681073   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.681104   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.681826   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I1212 22:03:56.682155   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.682336   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I1212 22:03:56.682763   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.682784   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.682823   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.683109   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.683361   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.683378   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.683874   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.683915   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.684678   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.686502   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I1212 22:03:56.686979   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.687208   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.687269   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.687545   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.687570   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.687906   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.688404   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.688447   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.691189   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35015
	I1212 22:03:56.691836   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.691881   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.693535   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I1212 22:03:56.693694   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.693886   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.694204   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.694223   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.694357   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.694375   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.694561   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.694757   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.695044   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.695574   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.695613   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.697792   84229 addons.go:231] Setting addon default-storageclass=true in "addons-361656"
	I1212 22:03:56.697833   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.698228   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.698257   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.704586   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I1212 22:03:56.704792   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I1212 22:03:56.704949   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.705292   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.705417   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1212 22:03:56.705666   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.705715   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.706283   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.706314   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.706499   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.706747   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.706981   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.707000   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.707153   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.707363   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.707399   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.707454   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.707903   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.707952   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.709742   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.709962   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.710319   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.710339   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.713281   84229 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 22:03:56.714659   84229 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:56.714678   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 22:03:56.714699   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.717685   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.718038   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.718063   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.718262   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.718476   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.718542   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I1212 22:03:56.718807   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.719003   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.719607   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.720138   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.720157   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.720622   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.720907   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.723800   84229 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-361656"
	I1212 22:03:56.723872   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:03:56.724324   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.724368   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.730812   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1212 22:03:56.731314   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.733167   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I1212 22:03:56.733353   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I1212 22:03:56.733681   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I1212 22:03:56.733700   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.734006   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.734026   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.734043   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.734317   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.734342   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.734555   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.734580   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.734651   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.734716   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.734765   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I1212 22:03:56.735173   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.735345   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.735366   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.735431   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.735844   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.735870   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.735947   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.735967   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.735996   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.736158   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.736174   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.736384   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.736427   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.736877   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.736911   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.737845   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44389
	I1212 22:03:56.738173   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.738608   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.738632   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.738740   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.738792   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1212 22:03:56.738995   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.739065   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.741274   84229 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 22:03:56.739691   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.739803   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.740661   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.742534   84229 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 22:03:56.742546   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 22:03:56.742562   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.742625   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.744138   84229 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 22:03:56.742819   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.745036   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.745566   84229 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:56.745589   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 22:03:56.745612   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.745802   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1212 22:03:56.747281   84229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:03:56.746053   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.746342   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.748434   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.748772   84229 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:56.748789   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:03:56.748805   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.748867   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.749297   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.749329   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.750036   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.750042   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.750117   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.750133   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.750154   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.750249   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.750419   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.750619   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.750673   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.750629   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.751205   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.751287   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.751337   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.751351   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.751638   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.753253   84229 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 22:03:56.751968   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.753081   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.753626   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.756336   84229 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 22:03:56.754871   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.755014   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.755043   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.756198   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I1212 22:03:56.756628   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I1212 22:03:56.757623   84229 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 22:03:56.757638   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 22:03:56.757660   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.757718   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.759664   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.759882   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.760185   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.762243   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I1212 22:03:56.762357   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.762400   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.762423   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.762665   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.762717   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.762965   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.763281   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.763301   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.763323   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.763369   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.763387   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.764127   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.764670   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.764709   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.765835   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.765843   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.767977   84229 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 22:03:56.766304   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.766475   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.766662   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I1212 22:03:56.767028   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I1212 22:03:56.768913   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I1212 22:03:56.769707   84229 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:56.769730   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 22:03:56.769750   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.769710   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.770120   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.770190   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.770648   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.770701   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.770734   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.770746   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.770764   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.771290   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.771331   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.771410   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.770621   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.771445   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.771714   84229 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-361656" context rescaled to 1 replicas
	I1212 22:03:56.771750   84229 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:56.773517   84229 out.go:177] * Verifying Kubernetes components...
	I1212 22:03:56.771953   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.772056   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.772339   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.773120   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.773921   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.775007   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.775050   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.775109   84229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:03:56.775161   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.775223   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.775265   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.775365   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.775567   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.775849   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:03:56.775891   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:03:56.775999   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.777671   84229 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 22:03:56.776632   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.777904   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.779026   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I1212 22:03:56.779198   84229 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:56.780730   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.780743   84229 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:56.779581   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.780929   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I1212 22:03:56.783756   84229 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 22:03:56.782512   84229 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:56.782740   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.782877   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.785324   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.785328   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 22:03:56.785359   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.785383   84229 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 22:03:56.785408   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 22:03:56.785427   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.785786   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.785936   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.785961   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.785988   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.786347   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.786569   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.788176   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.788412   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I1212 22:03:56.790165   84229 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 22:03:56.789003   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.789019   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.789856   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.790501   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.791277   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.791686   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.791705   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.791759   84229 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 22:03:56.791767   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 22:03:56.791782   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.791837   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.791890   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.791910   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.792016   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.793638   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 22:03:56.792610   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.792635   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.792751   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.794931   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.794937   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.796231   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 22:03:56.797685   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 22:03:56.795331   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.795372   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.795380   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.795555   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.795707   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.800483   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 22:03:56.799318   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.799322   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.799328   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.799471   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.800725   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.800896   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.802332   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 22:03:56.802921   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.804072   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 22:03:56.805569   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 22:03:56.807002   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 22:03:56.807024   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 22:03:56.805501   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 22:03:56.807044   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.805672   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I1212 22:03:56.806530   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I1212 22:03:56.810376   84229 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 22:03:56.809167   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.809689   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:03:56.811985   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 22:03:56.812010   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 22:03:56.812033   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.810067   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.810697   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.811092   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.811112   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:03:56.812171   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.812177   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:03:56.812072   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.812237   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.812304   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.812462   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.812525   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.812561   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.813319   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:03:56.813947   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.814164   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:03:56.814986   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.815496   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.815516   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.815632   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.815760   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.817606   84229 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 22:03:56.815958   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.816002   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:03:56.820435   84229 out.go:177]   - Using image docker.io/busybox:stable
	I1212 22:03:56.819342   84229 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:56.819444   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.821848   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:03:56.821866   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.821909   84229 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:56.821925   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 22:03:56.821943   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:03:56.822027   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	W1212 22:03:56.823135   84229 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47578->192.168.39.86:22: read: connection reset by peer
	I1212 22:03:56.823219   84229 retry.go:31] will retry after 253.225039ms: ssh: handshake failed: read tcp 192.168.39.1:47578->192.168.39.86:22: read: connection reset by peer
	I1212 22:03:56.825374   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.825428   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.825788   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.825824   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:03:56.825868   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.825890   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:03:56.826006   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.826138   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:03:56.826200   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.826281   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:03:56.826326   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.826414   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:03:56.826456   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:03:56.826838   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	W1212 22:03:56.827538   84229 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47594->192.168.39.86:22: read: connection reset by peer
	I1212 22:03:56.827565   84229 retry.go:31] will retry after 313.154216ms: ssh: handshake failed: read tcp 192.168.39.1:47594->192.168.39.86:22: read: connection reset by peer
	W1212 22:03:56.828097   84229 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47610->192.168.39.86:22: read: connection reset by peer
	I1212 22:03:56.828114   84229 retry.go:31] will retry after 179.38307ms: ssh: handshake failed: read tcp 192.168.39.1:47610->192.168.39.86:22: read: connection reset by peer
	I1212 22:03:56.972445   84229 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 22:03:56.972481   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 22:03:57.001443   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:57.018848   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:57.040474   84229 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:57.040505   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 22:03:57.046645   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:57.062909   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:57.100177   84229 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 22:03:57.100225   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 22:03:57.105213   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:57.129798   84229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:03:57.130505   84229 node_ready.go:35] waiting up to 6m0s for node "addons-361656" to be "Ready" ...
	I1212 22:03:57.180574   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:57.192878   84229 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 22:03:57.192909   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 22:03:57.232975   84229 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 22:03:57.233008   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 22:03:57.268042   84229 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 22:03:57.268071   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 22:03:57.352861   84229 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 22:03:57.352892   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 22:03:57.367037   84229 node_ready.go:49] node "addons-361656" has status "Ready":"True"
	I1212 22:03:57.367068   84229 node_ready.go:38] duration metric: took 236.539838ms waiting for node "addons-361656" to be "Ready" ...
	I1212 22:03:57.367079   84229 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:03:57.384553   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:57.421976   84229 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:57.422007   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 22:03:57.467742   84229 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 22:03:57.467775   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 22:03:57.475781   84229 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 22:03:57.475810   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 22:03:57.514863   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 22:03:57.514903   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 22:03:57.538458   84229 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:57.538490   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 22:03:57.642497   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:57.646993   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:57.695196   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 22:03:57.695221   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 22:03:57.749406   84229 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 22:03:57.749438   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 22:03:57.749672   84229 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 22:03:57.749687   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 22:03:57.755639   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:57.824873   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 22:03:57.824905   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 22:03:57.879121   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 22:03:57.879148   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 22:03:57.909566   84229 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 22:03:57.909593   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 22:03:57.938822   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 22:03:57.938848   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 22:03:57.950569   84229 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:57.950598   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 22:03:57.987819   84229 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 22:03:57.987846   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 22:03:58.002423   84229 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 22:03:58.002451   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 22:03:58.011616   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:58.045996   84229 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace to be "Ready" ...
	I1212 22:03:58.077510   84229 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 22:03:58.077544   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 22:03:58.100038   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 22:03:58.100060   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 22:03:58.145489   84229 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:58.145514   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 22:03:58.172213   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 22:03:58.172245   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 22:03:58.225614   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:58.255668   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 22:03:58.255697   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 22:03:58.316205   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 22:03:58.316232   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 22:03:58.369811   84229 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:03:58.369844   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 22:03:58.415596   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:04:00.676182   84229 pod_ready.go:102] pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:02.945645   84229 pod_ready.go:102] pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:03.236802   84229 pod_ready.go:97] pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.86 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-12 22:03:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-12-12 22:04:01 +0000 UTC,FinishedAt:2023-12-12 22:04:01 +0000 UTC,ContainerID:cri-o://2a4d9b03d388d413f5798d9014c2be17e3038686446fd80927367e5b1f9b9ec9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://2a4d9b03d388d413f5798d9014c2be17e3038686446fd80927367e5b1f9b9ec9 Started:0xc00345eadc AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:04:03.236839   84229 pod_ready.go:81] duration metric: took 5.19080103s waiting for pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace to be "Ready" ...
	E1212 22:04:03.236857   84229 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-4nq5c" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:03:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.86 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-12-12 22:03:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-12-12 22:04:01 +0000 UTC,FinishedAt:2023-12-12 22:04:01 +0000 UTC,ContainerID:cri-o://2a4d9b03d388d413f5798d9014c2be17e3038686446fd80927367e5b1f9b9ec9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://2a4d9b03d388d413f5798d9014c2be17e3038686446fd80927367e5b1f9b9ec9 Started:0xc00345eadc AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:04:03.236867   84229 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:03.238588   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.237109317s)
	I1212 22:04:03.238639   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:03.238648   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:03.239077   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:03.239085   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:03.239112   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:03.239131   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:03.239141   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:03.239518   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:03.239537   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:03.239556   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:03.395201   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.376308335s)
	I1212 22:04:03.395284   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:03.395299   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:03.395626   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:03.395679   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:03.395690   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:03.395707   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:03.395723   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:03.395985   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:03.395997   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:03.396027   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:04.826764   84229 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 22:04:04.826817   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:04:04.830224   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:04:04.830595   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:04:04.830628   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:04:04.830800   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:04:04.831011   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:04:04.831187   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:04:04.831342   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:04:05.060165   84229 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 22:04:05.099803   84229 addons.go:231] Setting addon gcp-auth=true in "addons-361656"
	I1212 22:04:05.099866   84229 host.go:66] Checking if "addons-361656" exists ...
	I1212 22:04:05.100190   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:04:05.100223   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:04:05.116136   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I1212 22:04:05.116619   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:04:05.117217   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:04:05.117239   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:04:05.117565   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:04:05.118187   84229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:04:05.118228   84229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:04:05.133071   84229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1212 22:04:05.133550   84229 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:04:05.134152   84229 main.go:141] libmachine: Using API Version  1
	I1212 22:04:05.134180   84229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:04:05.134552   84229 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:04:05.134753   84229 main.go:141] libmachine: (addons-361656) Calling .GetState
	I1212 22:04:05.136603   84229 main.go:141] libmachine: (addons-361656) Calling .DriverName
	I1212 22:04:05.136861   84229 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 22:04:05.136884   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHHostname
	I1212 22:04:05.140071   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:04:05.140570   84229 main.go:141] libmachine: (addons-361656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:1e:2f", ip: ""} in network mk-addons-361656: {Iface:virbr1 ExpiryTime:2023-12-12 23:03:17 +0000 UTC Type:0 Mac:52:54:00:9c:1e:2f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-361656 Clientid:01:52:54:00:9c:1e:2f}
	I1212 22:04:05.140605   84229 main.go:141] libmachine: (addons-361656) DBG | domain addons-361656 has defined IP address 192.168.39.86 and MAC address 52:54:00:9c:1e:2f in network mk-addons-361656
	I1212 22:04:05.140772   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHPort
	I1212 22:04:05.140988   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHKeyPath
	I1212 22:04:05.141220   84229 main.go:141] libmachine: (addons-361656) Calling .GetSSHUsername
	I1212 22:04:05.141397   84229 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/addons-361656/id_rsa Username:docker}
	I1212 22:04:05.529168   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:06.710182   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.663494552s)
	I1212 22:04:06.710226   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.647287273s)
	I1212 22:04:06.710265   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710235   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710281   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.710294   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.710302   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.605059347s)
	I1212 22:04:06.710329   84229 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.580500144s)
	I1212 22:04:06.710418   84229 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 22:04:06.710387   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710445   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.710545   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.710578   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.710590   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.710604   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710615   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.710694   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.710708   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.710718   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710732   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.710745   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.710786   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.710801   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.710817   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.710826   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.712515   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.712517   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.712521   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.712542   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.712555   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.712558   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.712558   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.712567   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.712544   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.712578   84229 addons.go:467] Verifying addon ingress=true in "addons-361656"
	I1212 22:04:06.714463   84229 out.go:177] * Verifying ingress addon...
	I1212 22:04:06.712900   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.532287857s)
	I1212 22:04:06.712971   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.328378811s)
	I1212 22:04:06.713022   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.070497866s)
	I1212 22:04:06.713053   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.066027331s)
	I1212 22:04:06.713241   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.957567834s)
	I1212 22:04:06.713409   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.701752391s)
	I1212 22:04:06.713500   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.487848421s)
	I1212 22:04:06.715983   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.715998   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716009   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716016   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716022   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716037   84229 main.go:141] libmachine: Making call to close driver server
	W1212 22:04:06.716052   84229 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:04:06.716019   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716076   84229 retry.go:31] will retry after 366.059997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:04:06.716058   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716012   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716088   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.715997   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716144   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716487   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.716494   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.716533   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.716548   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.716555   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.716562   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716574   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716590   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.716605   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.716619   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716636   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716641   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.716651   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.716666   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716678   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716535   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.716716   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.716727   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.716735   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.716953   84229 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 22:04:06.717082   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.717117   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.717126   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.717168   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.717196   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.717228   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.717240   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.717349   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.717368   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.717377   84229 addons.go:467] Verifying addon registry=true in "addons-361656"
	I1212 22:04:06.719314   84229 out.go:177] * Verifying registry addon...
	I1212 22:04:06.717589   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.717613   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.718336   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.718383   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.719356   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.719365   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.721643   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.721659   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.721871   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.721885   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.721895   84229 addons.go:467] Verifying addon metrics-server=true in "addons-361656"
	I1212 22:04:06.722416   84229 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 22:04:06.723487   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.723505   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.723520   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.723530   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.723539   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.723774   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.723791   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.734422   84229 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 22:04:06.734446   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:06.740582   84229 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:04:06.740612   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:06.750582   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.750608   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.750894   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.750912   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:06.750949   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	W1212 22:04:06.751028   84229 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1212 22:04:06.761730   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:06.763442   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:06.764061   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:06.764079   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:06.764371   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:06.764388   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:06.764404   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:07.083321   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:04:07.357894   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:07.366220   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:07.424764   84229 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.287873501s)
	I1212 22:04:07.424845   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.009194909s)
	I1212 22:04:07.426900   84229 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:04:07.424902   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:07.428485   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:07.430014   84229 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 22:04:07.428962   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:07.429068   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:07.431629   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:07.431648   84229 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 22:04:07.431663   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:07.431678   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:07.431664   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 22:04:07.431979   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:07.432000   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:07.432012   84229 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-361656"
	I1212 22:04:07.433690   84229 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 22:04:07.435907   84229 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 22:04:07.462786   84229 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 22:04:07.462820   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 22:04:07.495644   84229 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:04:07.495672   84229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 22:04:07.529996   84229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:04:07.778039   84229 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:04:07.778068   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:07.908532   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:07.908647   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:07.909007   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:07.924887   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:08.275984   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:08.277426   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:08.430910   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:08.809156   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:08.831455   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:08.946034   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:09.281281   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:09.283593   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:09.439876   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:09.673948   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.590548464s)
	I1212 22:04:09.674009   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:09.674021   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:09.674339   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:09.674359   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:09.674367   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:09.674379   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:09.674389   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:09.674673   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:09.674688   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:09.674712   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:09.771132   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:09.775703   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:09.954492   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:09.966909   84229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.436864652s)
	I1212 22:04:09.966974   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:09.967004   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:09.967343   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:09.967364   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:09.967380   84229 main.go:141] libmachine: Making call to close driver server
	I1212 22:04:09.967389   84229 main.go:141] libmachine: (addons-361656) Calling .Close
	I1212 22:04:09.967634   84229 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:04:09.967675   84229 main.go:141] libmachine: (addons-361656) DBG | Closing plugin on server side
	I1212 22:04:09.967683   84229 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:04:09.969536   84229 addons.go:467] Verifying addon gcp-auth=true in "addons-361656"
	I1212 22:04:09.971707   84229 out.go:177] * Verifying gcp-auth addon...
	I1212 22:04:09.974349   84229 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 22:04:10.041737   84229 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 22:04:10.041768   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:10.041849   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:10.082563   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:10.268045   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:10.270105   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:10.439403   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:10.586410   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:10.768747   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:10.770934   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:10.932218   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:11.088544   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:11.280906   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:11.288709   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:11.438465   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:11.588006   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:11.786810   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:11.788907   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:11.943091   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:12.087167   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:12.271221   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:12.273564   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:12.430654   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:12.463537   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:12.587880   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:12.766681   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:12.771017   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:12.930734   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:13.087233   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:13.272731   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:13.273218   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:13.431720   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:13.586600   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:13.776116   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:13.778899   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:13.939902   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:14.087928   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:14.271771   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:14.271972   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:14.434956   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:14.471536   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:14.587584   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:14.781161   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:14.781539   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:14.930415   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:15.089961   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:15.269406   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:15.276603   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:15.431155   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:15.587684   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:15.768913   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:15.774418   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:15.931625   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:16.090304   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:16.273018   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:16.274771   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:16.445391   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:16.586888   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:16.769217   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:16.774503   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:16.953443   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:16.977997   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:17.086349   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:17.277181   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:17.293778   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:17.458696   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:17.587874   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:17.769232   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:17.778700   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:17.939542   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:18.091738   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:18.272564   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:18.273154   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:18.439844   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:18.588680   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:18.792334   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:18.793783   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:18.931232   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:19.091805   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:19.594290   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:19.594701   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:19.598574   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:19.599078   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:19.605807   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:19.773037   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:19.778995   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:19.935400   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:20.087547   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:20.267477   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:20.269498   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:20.439982   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:20.598870   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:20.778356   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:20.779267   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:20.941533   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.095128   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:21.276924   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:21.278311   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:21.453135   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.595521   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:21.771616   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:21.775383   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:21.947075   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.975519   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:22.087784   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:22.275674   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:22.286800   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:22.443630   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:22.599160   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:22.773905   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:22.775457   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:22.934795   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:23.087798   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:23.268052   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:23.271193   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:23.432882   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:23.586386   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:23.766616   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:23.784523   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:23.932753   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:24.086412   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:24.267095   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:24.269168   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:24.433930   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:24.467061   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:24.589726   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:24.767816   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:24.769134   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:24.931497   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:25.087094   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:25.267316   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:25.273323   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:25.467709   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:25.587087   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:25.770670   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:25.772682   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:25.944600   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:26.087993   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:26.268516   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:26.272542   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:26.451874   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:26.467470   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:26.589531   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:26.770738   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:26.770776   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:26.934856   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:27.091325   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:27.267613   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:27.274516   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:27.431479   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:27.586889   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:27.767462   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:27.770463   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:27.939569   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:28.087267   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:28.266628   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:28.269286   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:28.433519   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:28.586645   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:28.772409   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:28.773020   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:28.933230   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:28.963547   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:29.087401   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:29.267881   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:29.268738   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:29.431837   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:29.587351   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:29.767435   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:29.769957   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:29.932018   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:30.086277   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:30.267031   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:30.271160   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:30.431724   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:30.587220   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:30.767268   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:30.769651   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:30.930602   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:31.086681   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:31.267401   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:31.268482   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:31.432726   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:31.462848   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:31.588141   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:31.767148   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:31.773626   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:31.930898   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:32.086723   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:32.269260   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:32.271611   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:32.431032   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:32.657105   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:32.767001   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:32.770815   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:32.931562   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:33.087250   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:33.267326   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:33.269703   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:33.431643   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:33.474257   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:33.840299   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:33.840603   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:33.842507   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:33.937149   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:34.088151   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:34.268068   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:34.272192   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:34.434145   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:34.591366   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:34.767104   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:34.774136   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:34.931986   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:35.099811   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:35.281407   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:35.285753   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:35.433996   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:35.476415   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:35.587220   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:35.767166   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:35.769451   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:35.939480   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:36.133897   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:36.269495   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:36.270446   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:36.431776   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:36.588465   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:36.769030   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:36.770118   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:36.932344   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:37.087857   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:37.267597   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:37.270500   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:37.432100   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:37.587390   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:37.776854   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:37.777586   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:38.378098   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:38.382926   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:38.383810   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:38.384503   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:38.384884   84229 pod_ready.go:102] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:38.431117   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:38.586521   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:38.770800   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:38.771132   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:38.936276   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:38.962751   84229 pod_ready.go:92] pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:38.962782   84229 pod_ready.go:81] duration metric: took 35.725904217s waiting for pod "coredns-5dd5756b68-wwrgq" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.962795   84229 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.967556   84229 pod_ready.go:92] pod "etcd-addons-361656" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:38.967585   84229 pod_ready.go:81] duration metric: took 4.780576ms waiting for pod "etcd-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.967598   84229 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.972809   84229 pod_ready.go:92] pod "kube-apiserver-addons-361656" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:38.972831   84229 pod_ready.go:81] duration metric: took 5.225442ms waiting for pod "kube-apiserver-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.972841   84229 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.980001   84229 pod_ready.go:92] pod "kube-controller-manager-addons-361656" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:38.980032   84229 pod_ready.go:81] duration metric: took 7.182244ms waiting for pod "kube-controller-manager-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.980046   84229 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjzv8" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.987555   84229 pod_ready.go:92] pod "kube-proxy-pjzv8" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:38.987584   84229 pod_ready.go:81] duration metric: took 7.528535ms waiting for pod "kube-proxy-pjzv8" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:38.987598   84229 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:39.086921   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:39.267655   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:39.269243   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:39.381991   84229 pod_ready.go:92] pod "kube-scheduler-addons-361656" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:39.382021   84229 pod_ready.go:81] duration metric: took 394.413877ms waiting for pod "kube-scheduler-addons-361656" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:39.382032   84229 pod_ready.go:38] duration metric: took 42.014939523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:04:39.382050   84229 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:04:39.382113   84229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:04:39.419077   84229 api_server.go:72] duration metric: took 42.647290813s to wait for apiserver process to appear ...
	I1212 22:04:39.419107   84229 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:04:39.419128   84229 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I1212 22:04:39.424617   84229 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I1212 22:04:39.425824   84229 api_server.go:141] control plane version: v1.28.4
	I1212 22:04:39.425851   84229 api_server.go:131] duration metric: took 6.737484ms to wait for apiserver health ...
	I1212 22:04:39.425861   84229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:04:39.430845   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:39.589842   84229 system_pods.go:59] 18 kube-system pods found
	I1212 22:04:39.589874   84229 system_pods.go:61] "coredns-5dd5756b68-wwrgq" [00c258da-10ba-4e62-9ae4-0ec482dc23ea] Running
	I1212 22:04:39.589879   84229 system_pods.go:61] "csi-hostpath-attacher-0" [231b62bc-5fa9-4829-a7d6-9b5ff69477b3] Running
	I1212 22:04:39.589884   84229 system_pods.go:61] "csi-hostpath-resizer-0" [9f2742b7-a6d0-4ff3-91fa-da8fa8b5ae81] Running
	I1212 22:04:39.589892   84229 system_pods.go:61] "csi-hostpathplugin-pkwtj" [de488436-16ae-4032-960c-b9c45afa5a3d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:39.589898   84229 system_pods.go:61] "etcd-addons-361656" [2c1e3247-a676-42b4-a7bb-ce47066ee417] Running
	I1212 22:04:39.589903   84229 system_pods.go:61] "kube-apiserver-addons-361656" [59fbf8a3-c83f-4264-84dd-36a067699b28] Running
	I1212 22:04:39.589907   84229 system_pods.go:61] "kube-controller-manager-addons-361656" [58b58094-bf0f-448a-a6ff-3d0f0999c62a] Running
	I1212 22:04:39.589912   84229 system_pods.go:61] "kube-ingress-dns-minikube" [0df541c9-11fb-444a-9fd1-cf852f2d1bd4] Running
	I1212 22:04:39.589916   84229 system_pods.go:61] "kube-proxy-pjzv8" [8e145d1c-391a-4995-9f1d-d133d382adc4] Running
	I1212 22:04:39.589920   84229 system_pods.go:61] "kube-scheduler-addons-361656" [b03303ba-5d30-45af-a6d3-69646577c833] Running
	I1212 22:04:39.589925   84229 system_pods.go:61] "metrics-server-7c66d45ddc-xcc44" [1965630d-64fd-4589-8013-157e45b51da6] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:04:39.589932   84229 system_pods.go:61] "nvidia-device-plugin-daemonset-hp8gn" [66283d05-a203-4f57-9ab7-6e01fd05f9de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:04:39.589938   84229 system_pods.go:61] "registry-proxy-vmks8" [6b6af208-0ccf-4504-8c9c-7a50353cd4bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:39.589944   84229 system_pods.go:61] "registry-t5r8q" [b3f4351f-6afa-4e2b-9e8e-902b9da6d859] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:04:39.589950   84229 system_pods.go:61] "snapshot-controller-58dbcc7b99-8vvgr" [158dd21a-c76c-4715-b42b-3b8584ac25b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:39.589965   84229 system_pods.go:61] "snapshot-controller-58dbcc7b99-bj6nf" [53805fe1-a499-4ccd-b440-ffd51f141d44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:39.589970   84229 system_pods.go:61] "storage-provisioner" [7fed1501-c613-4f8b-adb0-b94e9c7c30d8] Running
	I1212 22:04:39.589978   84229 system_pods.go:61] "tiller-deploy-7b677967b9-7rrmd" [d8666a3c-6977-4eea-ad49-1d5235697f29] Running
	I1212 22:04:39.589985   84229 system_pods.go:74] duration metric: took 164.116976ms to wait for pod list to return data ...
	I1212 22:04:39.589995   84229 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:04:39.591352   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:39.771379   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:39.771988   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:39.782573   84229 default_sa.go:45] found service account: "default"
	I1212 22:04:39.782603   84229 default_sa.go:55] duration metric: took 192.601785ms for default service account to be created ...
	I1212 22:04:39.782613   84229 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:04:39.941218   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:39.992517   84229 system_pods.go:86] 18 kube-system pods found
	I1212 22:04:39.992550   84229 system_pods.go:89] "coredns-5dd5756b68-wwrgq" [00c258da-10ba-4e62-9ae4-0ec482dc23ea] Running
	I1212 22:04:39.992555   84229 system_pods.go:89] "csi-hostpath-attacher-0" [231b62bc-5fa9-4829-a7d6-9b5ff69477b3] Running
	I1212 22:04:39.992560   84229 system_pods.go:89] "csi-hostpath-resizer-0" [9f2742b7-a6d0-4ff3-91fa-da8fa8b5ae81] Running
	I1212 22:04:39.992567   84229 system_pods.go:89] "csi-hostpathplugin-pkwtj" [de488436-16ae-4032-960c-b9c45afa5a3d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:39.992571   84229 system_pods.go:89] "etcd-addons-361656" [2c1e3247-a676-42b4-a7bb-ce47066ee417] Running
	I1212 22:04:39.992577   84229 system_pods.go:89] "kube-apiserver-addons-361656" [59fbf8a3-c83f-4264-84dd-36a067699b28] Running
	I1212 22:04:39.992581   84229 system_pods.go:89] "kube-controller-manager-addons-361656" [58b58094-bf0f-448a-a6ff-3d0f0999c62a] Running
	I1212 22:04:39.992585   84229 system_pods.go:89] "kube-ingress-dns-minikube" [0df541c9-11fb-444a-9fd1-cf852f2d1bd4] Running
	I1212 22:04:39.992590   84229 system_pods.go:89] "kube-proxy-pjzv8" [8e145d1c-391a-4995-9f1d-d133d382adc4] Running
	I1212 22:04:39.992594   84229 system_pods.go:89] "kube-scheduler-addons-361656" [b03303ba-5d30-45af-a6d3-69646577c833] Running
	I1212 22:04:39.992599   84229 system_pods.go:89] "metrics-server-7c66d45ddc-xcc44" [1965630d-64fd-4589-8013-157e45b51da6] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 22:04:39.992605   84229 system_pods.go:89] "nvidia-device-plugin-daemonset-hp8gn" [66283d05-a203-4f57-9ab7-6e01fd05f9de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 22:04:39.992618   84229 system_pods.go:89] "registry-proxy-vmks8" [6b6af208-0ccf-4504-8c9c-7a50353cd4bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:39.992626   84229 system_pods.go:89] "registry-t5r8q" [b3f4351f-6afa-4e2b-9e8e-902b9da6d859] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 22:04:39.992637   84229 system_pods.go:89] "snapshot-controller-58dbcc7b99-8vvgr" [158dd21a-c76c-4715-b42b-3b8584ac25b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:39.992646   84229 system_pods.go:89] "snapshot-controller-58dbcc7b99-bj6nf" [53805fe1-a499-4ccd-b440-ffd51f141d44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:39.992654   84229 system_pods.go:89] "storage-provisioner" [7fed1501-c613-4f8b-adb0-b94e9c7c30d8] Running
	I1212 22:04:39.992661   84229 system_pods.go:89] "tiller-deploy-7b677967b9-7rrmd" [d8666a3c-6977-4eea-ad49-1d5235697f29] Running
	I1212 22:04:39.992670   84229 system_pods.go:126] duration metric: took 210.051ms to wait for k8s-apps to be running ...
	I1212 22:04:39.992686   84229 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:04:39.992732   84229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:04:40.013293   84229 system_svc.go:56] duration metric: took 20.588825ms WaitForService to wait for kubelet.
	I1212 22:04:40.013328   84229 kubeadm.go:581] duration metric: took 43.241552208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:04:40.013354   84229 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:04:40.087777   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:40.182181   84229 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:04:40.182242   84229 node_conditions.go:123] node cpu capacity is 2
	I1212 22:04:40.182256   84229 node_conditions.go:105] duration metric: took 168.896907ms to run NodePressure ...
	I1212 22:04:40.182268   84229 start.go:228] waiting for startup goroutines ...
	I1212 22:04:40.266883   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:40.269526   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:40.431828   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:40.591093   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:40.766823   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:40.769201   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:40.932145   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:41.089015   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:41.266716   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:41.271867   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:41.430923   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:41.587521   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:41.769208   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:41.770427   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:41.934990   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:42.087648   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:42.268635   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:42.269032   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:42.432284   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:42.590341   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:42.767498   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:42.770146   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:42.932240   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:43.087380   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:43.269274   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:43.269417   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:43.431450   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:43.586452   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:43.799656   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:43.808550   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:43.941861   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:44.086878   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:44.273991   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:44.274195   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:44.455674   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:44.586462   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:44.766930   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:44.769027   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:44.941731   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:45.086663   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:45.270052   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:45.271449   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:45.431840   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:45.989409   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:45.989413   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:45.989541   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:45.992743   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:46.090036   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:46.266571   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:46.269106   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:46.431875   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:46.586715   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:46.767187   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:46.770732   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:46.931153   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:47.088263   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:47.267019   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:47.269646   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:47.431501   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:47.587975   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:47.766527   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:47.770462   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:47.931492   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.088145   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.269209   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.269470   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:48.431284   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.594425   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.769774   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:48.770108   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.931625   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.087389   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.268150   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.268260   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:49.431001   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.586684   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.767588   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.769295   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:49.935503   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.086499   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.270785   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:50.271103   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.431294   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.587259   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.767230   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.768988   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:50.933997   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.087130   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.271565   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.276758   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:51.431181   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.586568   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.767522   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.768660   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:51.941716   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.086824   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.268665   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:52.270089   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.689049   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.691555   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.766773   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.770614   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:52.932446   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.086847   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.267776   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.269255   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:53.442565   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.587712   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.771152   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:53.773620   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.932639   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:54.086664   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:54.267201   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.269730   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:54.433978   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:54.587026   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:54.766403   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.769661   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:54.931221   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.086844   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:55.268380   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.270298   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:55.431505   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.586698   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:55.767757   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.768301   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:55.932884   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.087435   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:56.267482   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.271023   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:56.432427   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.587584   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:56.772691   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.772837   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:56.932887   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.089462   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:57.266949   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.269974   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:57.435036   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.587669   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:57.767184   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.769563   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:57.932056   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.087198   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:58.278698   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:58.279055   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.430906   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.586685   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:58.769391   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.775553   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:58.932727   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.086898   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:59.268962   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:59.270096   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:59.432199   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.589305   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:59.767233   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:59.770443   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:59.931997   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.087301   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:00.267578   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:00.270614   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:00.482695   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.586372   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:00.770975   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:00.773503   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:00.931904   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.086914   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:01.267881   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:01.268891   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:01.430722   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.586238   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:01.782716   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:01.784418   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:01.935058   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.091515   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:02.269331   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:02.269457   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:02.431651   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.586483   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:02.773988   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:02.776544   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:02.943962   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.086768   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:03.268357   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:03.268600   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:03.432148   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.586489   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:03.767712   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:03.769362   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:03.935609   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.086168   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:04.267450   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:04.270215   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:04.433292   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.598754   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:04.777321   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:04.778077   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:05:04.936834   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:05.093432   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:05.269370   84229 kapi.go:107] duration metric: took 58.546950746s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 22:05:05.269706   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:05.430501   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:05.586359   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:05.767046   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:05.930685   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:06.104108   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:06.268038   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:06.430560   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:06.588022   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:06.770077   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:06.953428   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:07.086240   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:07.268757   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:07.619799   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:07.653049   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:07.795157   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:07.938654   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:08.088825   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:08.267076   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:08.431980   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:08.590227   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:08.807972   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:08.934416   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:09.087327   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:09.266269   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:09.431311   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:09.624670   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:09.768676   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:09.934434   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:10.088186   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:10.267064   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:10.434486   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:10.588563   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:10.767986   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:10.933475   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:11.092755   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:11.268837   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:11.432320   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:11.589214   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:11.768198   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:11.931191   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:12.088550   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:12.267270   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:12.431538   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:12.586684   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:12.767059   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:12.931232   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:13.087571   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:13.269560   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:13.511189   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:13.589455   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:13.767602   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:13.932238   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:14.086967   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:14.267061   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:14.432130   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:14.588718   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:14.766829   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:14.930594   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:15.086972   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:15.266354   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:15.431525   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:15.587311   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:15.767032   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:15.930766   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:16.086799   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:16.267423   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:16.431270   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:16.587710   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:16.766442   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:16.932060   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:17.087258   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:17.270763   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:17.431134   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:17.587509   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:17.779368   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:17.932469   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:18.086566   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:18.267235   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:18.431996   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:18.587204   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:18.767128   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:18.931410   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:19.086256   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:19.267561   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:19.432799   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:19.600108   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:19.767564   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:19.936108   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:20.087823   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:20.274994   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:20.431530   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:20.591461   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:20.766938   84229 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:05:20.951566   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:21.087777   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:21.268624   84229 kapi.go:107] duration metric: took 1m14.55166841s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 22:05:21.433195   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:21.587410   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:21.946900   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:22.087258   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:22.632529   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:22.633117   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:22.932066   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:23.096904   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:23.436650   84229 kapi.go:107] duration metric: took 1m16.000737889s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 22:05:23.586656   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:24.087010   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:24.587827   84229 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:05:25.090865   84229 kapi.go:107] duration metric: took 1m15.116511047s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 22:05:25.092722   84229 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-361656 cluster.
	I1212 22:05:25.094503   84229 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 22:05:25.096010   84229 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 22:05:25.097874   84229 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1212 22:05:25.099433   84229 addons.go:502] enable addons completed in 1m28.444407541s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns inspektor-gadget metrics-server helm-tiller default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1212 22:05:25.099479   84229 start.go:233] waiting for cluster config update ...
	I1212 22:05:25.099501   84229 start.go:242] writing updated cluster config ...
	I1212 22:05:25.099813   84229 ssh_runner.go:195] Run: rm -f paused
	I1212 22:05:25.157136   84229 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:05:25.159213   84229 out.go:177] * Done! kubectl is now configured to use "addons-361656" cluster and "default" namespace by default
	
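For reference on the `gcp-auth-skip-secret` hint logged above: the label is set in a pod's metadata at creation time. A minimal, illustrative invocation (pod name and image are placeholders; the label key and value mirror those visible on the ingress-nginx controller pod sandbox in the CRI-O log below):

    kubectl --context addons-361656 run demo --image=nginx --labels="gcp-auth-skip-secret=true"

Pods carrying this label are skipped by the gcp-auth webhook, so no GCP credential secret is mounted into them; existing pods without it need to be recreated (or the addon re-enabled with --refresh) as the log notes.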
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 22:03:14 UTC, ends at Tue 2023-12-12 22:08:04 UTC. --
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.825700037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=be0b4e65-5ecc-4bfb-89b4-c48984684508 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.827402663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a2dd2903-a667-4972-bae8-91c61f47023b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.828956520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702418884828937216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=a2dd2903-a667-4972-bae8-91c61f47023b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.829961103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b0acaf4b-6fb4-4b61-9d4d-90d1f09f01ca name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.830051379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b0acaf4b-6fb4-4b61-9d4d-90d1f09f01ca name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.830487391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8a3cb2b14e2fd79d7dafa0ab819a82caec3d58e3e85f9e47d7bdc4857b87615,PodSandboxId:11126c04826ceb9a433b50f982be4a2bec9863a8a4dbd013c9f12c59566dac37,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702418876330831043,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-99x22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5d8c21c-f196-454c-b323-28842ec5c5b8,},Annotations:map[string]string{io.kubernetes.container.hash: cddf2ed2,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855635ac50884de1beb470732a2c911ff377aeebc88085191adef9594909a13f,PodSandboxId:867a762363d97bdf26df37b5ac3d199c4c665e3ed5b0133a332288d2a65f2f47,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702418752080440373,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wvjmv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 80a749a9-3e48-4683-a511-fa220661bdc4,},An
notations:map[string]string{io.kubernetes.container.hash: a2b7dcb6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1552202dfef028710685a2827769c363f3d6c8ba6341e48133e1d41d1dc2bbb,PodSandboxId:8ffd8386b6c0454f2a8dab00e5eaf45381df3665006cf580653fc459671cd6f0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702418736319930574,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 5b06d6d2-998c-4fa5-b223-3add010e8e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d9152c3e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c228d72c6887e427e4b02bcdadbb57a5a4f6ba35695daa5ad74c544d1deb91,PodSandboxId:2ceb5cba92a483d2405e8bcaa9c9cf5afc393cb33a66d4c96a4f774df12c7a89,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702418724628373953,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-9qbm2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ef1fc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c626226dd367436ed15e97ef6fb405cacb39d7b199d71e75e414222202006c,PodSandboxId:579d1fdad5ef5b0f5931f6a423ea6114436db008de23d7aae5def900a5055953,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024187
02791044868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9r5ds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 533d35a4-b8a6-4273-8ca1-b4126c10ec65,},Annotations:map[string]string{io.kubernetes.container.hash: 70ad9c71,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041d4cac61ba979536ec65a3a6199ea430cc38676e6dedd99e5f625fe87e95dd,PodSandboxId:ea58cc5be71f0621444b314e89d0e2adbc57e047ecc16409915b2c3e28fe422d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702418688029925294,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pts6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5f162bf0-b36e-4d15-bf62-0dbadd581cd8,},Annotations:map[string]string{io.kubernetes.container.hash: ce9bca44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef04e95ad1fe933b031cad975c37a81891ea763c78a0635e4ed9e5a08545d83,PodSandboxId:fac761f91352a61431ccf85bcb6b64d7fe27631568accb2971a821fd80a08a8b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1702418679799337899,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-qccjq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c22d109-bee3-43f0-af69-51c2db7ebc0b,},Annotations:map[string]string{io.kubernetes.container.hash: d139ac8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb98f8d7789efc274b1cf924697cb141142e383644b53888a17ba15c569025,PodSandboxId:e0d58ff32db7cedbfc823cb05940833bc6692c8a98b08c77d69165ffc769df31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702418659624390243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fed1501-c613-4f8b-adb0-b94e9c7c30d8,},Annotations:map[string]string{io.kubernetes.container.hash: c639785a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d919eb6810b21cbb9196d13258c29f88172688f2a3fc47a26646d954fe44437,PodSandboxId:ee2881903ffaa04fd4b2dbda90ca8ba7f4cadeb1b9402775cb059db257ea329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube
-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702418652795496165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjzv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e145d1c-391a-4995-9f1d-d133d382adc4,},Annotations:map[string]string{io.kubernetes.container.hash: e0d07811,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20f0abe6fb84210558e8488bcd0c3d00bc1a70c4ee1a58c3509d87d431f7e88,PodSandboxId:00b8b393b274b3191069b82176fee3eea8f5fc9fc6cfb7895b7e8de0d9b908de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044a
eb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702418642317442308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wwrgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00c258da-10ba-4e62-9ae4-0ec482dc23ea,},Annotations:map[string]string{io.kubernetes.container.hash: d3b7a77a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53bacbebf69a069a9e72b6111c84f56ce650ba89aeb0470ce89711fc7f26274e,PodSandboxId:65a204f35462c9e03b4bd71d04a14264802ff79a4fbef69308f49855e69221b0,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702418617835782518,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d32cb5b6e84fc769121d58d6fc32c246,},Annotations:map[string]string{io.kubernetes.container.hash: b47b8a21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e321afdf117e7d05e267871cf0f1a175739a242ca3e0b4ff06876917f8d35d44,PodSandboxId:1fbfb04e9bf39d63cc151d9498b201f81efcce516aeb4a200fa6f61237bc9ea1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d
4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702418617617557832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee24e51ef69edeea560f95940655be4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a13315939f9f84a673ce03e6ee0001b426e72d67e06f33b3ea07a6b196f3292,PodSandboxId:5193fa5ec2c64d8b6a6f047eb72466cb3fce272b153bca331293d49f53784304,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e
7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702418617042580820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95206490b18d61bd7c5803d7aafb2e07,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a80dbf9b6a168ae47ecd32107e17d0fe6dbefcb1ca495a7f868935f3244fa,PodSandboxId:abe71fe755873f9471bb01f8c2b737b4dd8541887031d0ec4a58b5023274b491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37d
b33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702418617020007277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc3200671f47f1ba478f0e65eded0e8,},Annotations:map[string]string{io.kubernetes.container.hash: 10631f5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b0acaf4b-6fb4-4b61-9d4d-90d1f09f01ca name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.836665319Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=dcd2aff1-e484-44a2-8415-f703a9affa48 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.837034456Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:11126c04826ceb9a433b50f982be4a2bec9863a8a4dbd013c9f12c59566dac37,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-99x22,Uid:c5d8c21c-f196-454c-b323-28842ec5c5b8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418874331594968,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-99x22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5d8c21c-f196-454c-b323-28842ec5c5b8,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:07:53.992688886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:867a762363d97bdf26df37b5ac3d199c4c665e3ed5b0133a332288d2a65f2f47,Metadata:&PodSandboxMetadata{Name:headlamp-777fd4b855-wvjmv,Uid:80a749a9-3e48-4683-a511-fa220661bdc4,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418745574194325,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-777fd4b855-wvjmv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 80a749a9-3e48-4683-a511-fa220661bdc4,pod-template-hash: 777fd4b855,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:05:45.222088326Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ffd8386b6c0454f2a8dab00e5eaf45381df3665006cf580653fc459671cd6f0,Metadata:&PodSandboxMetadata{Name:nginx,Uid:5b06d6d2-998c-4fa5-b223-3add010e8e2f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418732955212064,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5b06d6d2-998c-4fa5-b223-3add010e8e2f,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-
12-12T22:05:32.416157208Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ceb5cba92a483d2405e8bcaa9c9cf5afc393cb33a66d4c96a4f774df12c7a89,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-9qbm2,Uid:f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418713960736254,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-9qbm2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:04:09.976301997Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b8951d0f3e2eb966afdcfe2369b0a68e390221505c24b90c1c0c773253c8ef6,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7c6974c4d8-b8x52,Uid:6e4a747a-0126-4ff4-9292-5071b16cb10d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1702418710845577173,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7c6974c4d8-b8x52,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6e4a747a-0126-4ff4-9292-5071b16cb10d,pod-template-hash: 7c6974c4d8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:04:06.595983604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:579d1fdad5ef5b0f5931f6a423ea6114436db008de23d7aae5def900a5055953,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-9r5ds,Uid:533d35a4-b8a6-4273-8ca1-b4126c10ec65,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702418647067087314,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kube
rnetes.io/controller-uid: dded50ca-08f4-452e-af51-354de478f32e,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: dded50ca-08f4-452e-af51-354de478f32e,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-9r5ds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 533d35a4-b8a6-4273-8ca1-b4126c10ec65,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:04:06.635311438Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea58cc5be71f0621444b314e89d0e2adbc57e047ecc16409915b2c3e28fe422d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-pts6h,Uid:5f162bf0-b36e-4d15-bf62-0dbadd581cd8,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702418647058211978,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid:
de21aa5b-b872-4d83-b387-e43373418fe0,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: de21aa5b-b872-4d83-b387-e43373418fe0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-pts6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5f162bf0-b36e-4d15-bf62-0dbadd581cd8,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:04:06.624386819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fac761f91352a61431ccf85bcb6b64d7fe27631568accb2971a821fd80a08a8b,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-78b46b4d5c-qccjq,Uid:9c22d109-bee3-43f0-af69-51c2db7ebc0b,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418645901974481,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-qccjq,io.kubernetes.pod.namespace: local-path-storage,io.ku
bernetes.pod.uid: 9c22d109-bee3-43f0-af69-51c2db7ebc0b,pod-template-hash: 78b46b4d5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:04:05.254485578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0d58ff32db7cedbfc823cb05940833bc6692c8a98b08c77d69165ffc769df31,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7fed1501-c613-4f8b-adb0-b94e9c7c30d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418645862780616,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fed1501-c613-4f8b-adb0-b94e9c7c30d8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-prov
isioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-12T22:04:05.222986432Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11be7f20d7075bddce74ddc365e7b7a81552f303b9bf8c6e2d7ff300f911e174,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:0df541c9-11fb-444a-9fd1-cf852f2d1bd4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1702418644931311949,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-
ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0df541c9-11fb-444a-9fd1-cf852f2d1bd4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-12-12T22:04:03.963180840Z,kubernetes.io/config.source: a
pi,},RuntimeHandler:,},&PodSandbox{Id:00b8b393b274b3191069b82176fee3eea8f5fc9fc6cfb7895b7e8de0d9b908de,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wwrgq,Uid:00c258da-10ba-4e62-9ae4-0ec482dc23ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418638755031470,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wwrgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00c258da-10ba-4e62-9ae4-0ec482dc23ea,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:03:58.117368143Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee2881903ffaa04fd4b2dbda90ca8ba7f4cadeb1b9402775cb059db257ea329f,Metadata:&PodSandboxMetadata{Name:kube-proxy-pjzv8,Uid:8e145d1c-391a-4995-9f1d-d133d382adc4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418638194761630,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: kube-proxy-pjzv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e145d1c-391a-4995-9f1d-d133d382adc4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T22:03:57.263649524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abe71fe755873f9471bb01f8c2b737b4dd8541887031d0ec4a58b5023274b491,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-361656,Uid:afc3200671f47f1ba478f0e65eded0e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418616597585864,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc3200671f47f1ba478f0e65eded0e8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.86:8443,kubernetes.io/config.hash: afc3200671f47f1
ba478f0e65eded0e8,kubernetes.io/config.seen: 2023-12-12T22:03:36.030714813Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fbfb04e9bf39d63cc151d9498b201f81efcce516aeb4a200fa6f61237bc9ea1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-361656,Uid:6ee24e51ef69edeea560f95940655be4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418616577194272,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee24e51ef69edeea560f95940655be4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6ee24e51ef69edeea560f95940655be4,kubernetes.io/config.seen: 2023-12-12T22:03:36.030717488Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65a204f35462c9e03b4bd71d04a14264802ff79a4fbef69308f49855e69221b0,Metadata:&PodSandboxMetadata{Name:etcd-addons-361656,Uid:d32cb5b6e84fc769121d58d6fc32c246,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418616564598104,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d32cb5b6e84fc769121d58d6fc32c246,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.86:2379,kubernetes.io/config.hash: d32cb5b6e84fc769121d58d6fc32c246,kubernetes.io/config.seen: 2023-12-12T22:03:36.030711710Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5193fa5ec2c64d8b6a6f047eb72466cb3fce272b153bca331293d49f53784304,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-361656,Uid:95206490b18d61bd7c5803d7aafb2e07,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702418616554510084,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-361
656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95206490b18d61bd7c5803d7aafb2e07,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95206490b18d61bd7c5803d7aafb2e07,kubernetes.io/config.seen: 2023-12-12T22:03:36.030716748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=dcd2aff1-e484-44a2-8415-f703a9affa48 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.838365114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f8f6393-7309-4d21-96c1-f5755120321f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.838788340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f8f6393-7309-4d21-96c1-f5755120321f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.839153125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8a3cb2b14e2fd79d7dafa0ab819a82caec3d58e3e85f9e47d7bdc4857b87615,PodSandboxId:11126c04826ceb9a433b50f982be4a2bec9863a8a4dbd013c9f12c59566dac37,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702418876330831043,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-99x22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5d8c21c-f196-454c-b323-28842ec5c5b8,},Annotations:map[string]string{io.kubernetes.container.hash: cddf2ed2,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855635ac50884de1beb470732a2c911ff377aeebc88085191adef9594909a13f,PodSandboxId:867a762363d97bdf26df37b5ac3d199c4c665e3ed5b0133a332288d2a65f2f47,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702418752080440373,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wvjmv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 80a749a9-3e48-4683-a511-fa220661bdc4,},An
notations:map[string]string{io.kubernetes.container.hash: a2b7dcb6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1552202dfef028710685a2827769c363f3d6c8ba6341e48133e1d41d1dc2bbb,PodSandboxId:8ffd8386b6c0454f2a8dab00e5eaf45381df3665006cf580653fc459671cd6f0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702418736319930574,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 5b06d6d2-998c-4fa5-b223-3add010e8e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d9152c3e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c228d72c6887e427e4b02bcdadbb57a5a4f6ba35695daa5ad74c544d1deb91,PodSandboxId:2ceb5cba92a483d2405e8bcaa9c9cf5afc393cb33a66d4c96a4f774df12c7a89,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702418724628373953,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-9qbm2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ef1fc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c626226dd367436ed15e97ef6fb405cacb39d7b199d71e75e414222202006c,PodSandboxId:579d1fdad5ef5b0f5931f6a423ea6114436db008de23d7aae5def900a5055953,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024187
02791044868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9r5ds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 533d35a4-b8a6-4273-8ca1-b4126c10ec65,},Annotations:map[string]string{io.kubernetes.container.hash: 70ad9c71,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041d4cac61ba979536ec65a3a6199ea430cc38676e6dedd99e5f625fe87e95dd,PodSandboxId:ea58cc5be71f0621444b314e89d0e2adbc57e047ecc16409915b2c3e28fe422d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702418688029925294,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pts6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5f162bf0-b36e-4d15-bf62-0dbadd581cd8,},Annotations:map[string]string{io.kubernetes.container.hash: ce9bca44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef04e95ad1fe933b031cad975c37a81891ea763c78a0635e4ed9e5a08545d83,PodSandboxId:fac761f91352a61431ccf85bcb6b64d7fe27631568accb2971a821fd80a08a8b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1702418679799337899,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-qccjq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c22d109-bee3-43f0-af69-51c2db7ebc0b,},Annotations:map[string]string{io.kubernetes.container.hash: d139ac8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb98f8d7789efc274b1cf924697cb141142e383644b53888a17ba15c569025,PodSandboxId:e0d58ff32db7cedbfc823cb05940833bc6692c8a98b08c77d69165ffc769df31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702418659624390243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fed1501-c613-4f8b-adb0-b94e9c7c30d8,},Annotations:map[string]string{io.kubernetes.container.hash: c639785a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d919eb6810b21cbb9196d13258c29f88172688f2a3fc47a26646d954fe44437,PodSandboxId:ee2881903ffaa04fd4b2dbda90ca8ba7f4cadeb1b9402775cb059db257ea329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube
-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702418652795496165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjzv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e145d1c-391a-4995-9f1d-d133d382adc4,},Annotations:map[string]string{io.kubernetes.container.hash: e0d07811,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20f0abe6fb84210558e8488bcd0c3d00bc1a70c4ee1a58c3509d87d431f7e88,PodSandboxId:00b8b393b274b3191069b82176fee3eea8f5fc9fc6cfb7895b7e8de0d9b908de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044a
eb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702418642317442308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wwrgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00c258da-10ba-4e62-9ae4-0ec482dc23ea,},Annotations:map[string]string{io.kubernetes.container.hash: d3b7a77a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53bacbebf69a069a9e72b6111c84f56ce650ba89aeb0470ce89711fc7f26274e,PodSandboxId:65a204f35462c9e03b4bd71d04a14264802ff79a4fbef69308f49855e69221b0,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702418617835782518,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d32cb5b6e84fc769121d58d6fc32c246,},Annotations:map[string]string{io.kubernetes.container.hash: b47b8a21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e321afdf117e7d05e267871cf0f1a175739a242ca3e0b4ff06876917f8d35d44,PodSandboxId:1fbfb04e9bf39d63cc151d9498b201f81efcce516aeb4a200fa6f61237bc9ea1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d
4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702418617617557832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee24e51ef69edeea560f95940655be4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a13315939f9f84a673ce03e6ee0001b426e72d67e06f33b3ea07a6b196f3292,PodSandboxId:5193fa5ec2c64d8b6a6f047eb72466cb3fce272b153bca331293d49f53784304,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e
7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702418617042580820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95206490b18d61bd7c5803d7aafb2e07,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a80dbf9b6a168ae47ecd32107e17d0fe6dbefcb1ca495a7f868935f3244fa,PodSandboxId:abe71fe755873f9471bb01f8c2b737b4dd8541887031d0ec4a58b5023274b491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37d
b33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702418617020007277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc3200671f47f1ba478f0e65eded0e8,},Annotations:map[string]string{io.kubernetes.container.hash: 10631f5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f8f6393-7309-4d21-96c1-f5755120321f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.876186731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=673c65ad-2425-46e4-b742-137a5edbaf93 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.876305303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=673c65ad-2425-46e4-b742-137a5edbaf93 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.878089559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3fee053a-9bab-4856-84a5-cf5ceb731abc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.879487952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702418884879469175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=3fee053a-9bab-4856-84a5-cf5ceb731abc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.880105649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d382a65-cb2d-4d64-8bd0-bd278cd993ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.880164617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d382a65-cb2d-4d64-8bd0-bd278cd993ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.880569984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8a3cb2b14e2fd79d7dafa0ab819a82caec3d58e3e85f9e47d7bdc4857b87615,PodSandboxId:11126c04826ceb9a433b50f982be4a2bec9863a8a4dbd013c9f12c59566dac37,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702418876330831043,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-99x22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5d8c21c-f196-454c-b323-28842ec5c5b8,},Annotations:map[string]string{io.kubernetes.container.hash: cddf2ed2,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855635ac50884de1beb470732a2c911ff377aeebc88085191adef9594909a13f,PodSandboxId:867a762363d97bdf26df37b5ac3d199c4c665e3ed5b0133a332288d2a65f2f47,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702418752080440373,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wvjmv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 80a749a9-3e48-4683-a511-fa220661bdc4,},An
notations:map[string]string{io.kubernetes.container.hash: a2b7dcb6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1552202dfef028710685a2827769c363f3d6c8ba6341e48133e1d41d1dc2bbb,PodSandboxId:8ffd8386b6c0454f2a8dab00e5eaf45381df3665006cf580653fc459671cd6f0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702418736319930574,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 5b06d6d2-998c-4fa5-b223-3add010e8e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d9152c3e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c228d72c6887e427e4b02bcdadbb57a5a4f6ba35695daa5ad74c544d1deb91,PodSandboxId:2ceb5cba92a483d2405e8bcaa9c9cf5afc393cb33a66d4c96a4f774df12c7a89,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702418724628373953,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-9qbm2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ef1fc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c626226dd367436ed15e97ef6fb405cacb39d7b199d71e75e414222202006c,PodSandboxId:579d1fdad5ef5b0f5931f6a423ea6114436db008de23d7aae5def900a5055953,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024187
02791044868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9r5ds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 533d35a4-b8a6-4273-8ca1-b4126c10ec65,},Annotations:map[string]string{io.kubernetes.container.hash: 70ad9c71,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041d4cac61ba979536ec65a3a6199ea430cc38676e6dedd99e5f625fe87e95dd,PodSandboxId:ea58cc5be71f0621444b314e89d0e2adbc57e047ecc16409915b2c3e28fe422d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702418688029925294,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pts6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5f162bf0-b36e-4d15-bf62-0dbadd581cd8,},Annotations:map[string]string{io.kubernetes.container.hash: ce9bca44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef04e95ad1fe933b031cad975c37a81891ea763c78a0635e4ed9e5a08545d83,PodSandboxId:fac761f91352a61431ccf85bcb6b64d7fe27631568accb2971a821fd80a08a8b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1702418679799337899,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-qccjq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c22d109-bee3-43f0-af69-51c2db7ebc0b,},Annotations:map[string]string{io.kubernetes.container.hash: d139ac8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb98f8d7789efc274b1cf924697cb141142e383644b53888a17ba15c569025,PodSandboxId:e0d58ff32db7cedbfc823cb05940833bc6692c8a98b08c77d69165ffc769df31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702418659624390243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fed1501-c613-4f8b-adb0-b94e9c7c30d8,},Annotations:map[string]string{io.kubernetes.container.hash: c639785a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d919eb6810b21cbb9196d13258c29f88172688f2a3fc47a26646d954fe44437,PodSandboxId:ee2881903ffaa04fd4b2dbda90ca8ba7f4cadeb1b9402775cb059db257ea329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube
-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702418652795496165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjzv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e145d1c-391a-4995-9f1d-d133d382adc4,},Annotations:map[string]string{io.kubernetes.container.hash: e0d07811,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20f0abe6fb84210558e8488bcd0c3d00bc1a70c4ee1a58c3509d87d431f7e88,PodSandboxId:00b8b393b274b3191069b82176fee3eea8f5fc9fc6cfb7895b7e8de0d9b908de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044a
eb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702418642317442308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wwrgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00c258da-10ba-4e62-9ae4-0ec482dc23ea,},Annotations:map[string]string{io.kubernetes.container.hash: d3b7a77a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53bacbebf69a069a9e72b6111c84f56ce650ba89aeb0470ce89711fc7f26274e,PodSandboxId:65a204f35462c9e03b4bd71d04a14264802ff79a4fbef69308f49855e69221b0,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702418617835782518,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d32cb5b6e84fc769121d58d6fc32c246,},Annotations:map[string]string{io.kubernetes.container.hash: b47b8a21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e321afdf117e7d05e267871cf0f1a175739a242ca3e0b4ff06876917f8d35d44,PodSandboxId:1fbfb04e9bf39d63cc151d9498b201f81efcce516aeb4a200fa6f61237bc9ea1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d
4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702418617617557832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee24e51ef69edeea560f95940655be4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a13315939f9f84a673ce03e6ee0001b426e72d67e06f33b3ea07a6b196f3292,PodSandboxId:5193fa5ec2c64d8b6a6f047eb72466cb3fce272b153bca331293d49f53784304,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e
7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702418617042580820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95206490b18d61bd7c5803d7aafb2e07,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a80dbf9b6a168ae47ecd32107e17d0fe6dbefcb1ca495a7f868935f3244fa,PodSandboxId:abe71fe755873f9471bb01f8c2b737b4dd8541887031d0ec4a58b5023274b491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37d
b33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702418617020007277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc3200671f47f1ba478f0e65eded0e8,},Annotations:map[string]string{io.kubernetes.container.hash: 10631f5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d382a65-cb2d-4d64-8bd0-bd278cd993ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.928052151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=939e2c62-fd72-46b7-b78f-fed22e265c20 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.928120131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=939e2c62-fd72-46b7-b78f-fed22e265c20 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.929985937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8221a9e2-9f74-4c29-8576-34accd698e79 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.931223859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702418884931202951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543773,},InodesUsed:&UInt64Value{Value:227,},},},}" file="go-grpc-middleware/chain.go:25" id=8221a9e2-9f74-4c29-8576-34accd698e79 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.931977487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fe95486f-b71a-4535-a552-48e1dca55bea name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.932033172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fe95486f-b71a-4535-a552-48e1dca55bea name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:08:04 addons-361656 crio[715]: time="2023-12-12 22:08:04.932425368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8a3cb2b14e2fd79d7dafa0ab819a82caec3d58e3e85f9e47d7bdc4857b87615,PodSandboxId:11126c04826ceb9a433b50f982be4a2bec9863a8a4dbd013c9f12c59566dac37,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702418876330831043,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-99x22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5d8c21c-f196-454c-b323-28842ec5c5b8,},Annotations:map[string]string{io.kubernetes.container.hash: cddf2ed2,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855635ac50884de1beb470732a2c911ff377aeebc88085191adef9594909a13f,PodSandboxId:867a762363d97bdf26df37b5ac3d199c4c665e3ed5b0133a332288d2a65f2f47,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1702418752080440373,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-wvjmv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 80a749a9-3e48-4683-a511-fa220661bdc4,},An
notations:map[string]string{io.kubernetes.container.hash: a2b7dcb6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1552202dfef028710685a2827769c363f3d6c8ba6341e48133e1d41d1dc2bbb,PodSandboxId:8ffd8386b6c0454f2a8dab00e5eaf45381df3665006cf580653fc459671cd6f0,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702418736319930574,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 5b06d6d2-998c-4fa5-b223-3add010e8e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d9152c3e,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c228d72c6887e427e4b02bcdadbb57a5a4f6ba35695daa5ad74c544d1deb91,PodSandboxId:2ceb5cba92a483d2405e8bcaa9c9cf5afc393cb33a66d4c96a4f774df12c7a89,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1702418724628373953,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-9qbm2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f980d6db-f2b0-40e8-a4ab-a8f9a8746c2c,},Annotations:map[string]string{io.kubernetes.container.hash: dc4ef1fc,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c626226dd367436ed15e97ef6fb405cacb39d7b199d71e75e414222202006c,PodSandboxId:579d1fdad5ef5b0f5931f6a423ea6114436db008de23d7aae5def900a5055953,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17024187
02791044868,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-9r5ds,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 533d35a4-b8a6-4273-8ca1-b4126c10ec65,},Annotations:map[string]string{io.kubernetes.container.hash: 70ad9c71,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:041d4cac61ba979536ec65a3a6199ea430cc38676e6dedd99e5f625fe87e95dd,PodSandboxId:ea58cc5be71f0621444b314e89d0e2adbc57e047ecc16409915b2c3e28fe422d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1702418688029925294,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pts6h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5f162bf0-b36e-4d15-bf62-0dbadd581cd8,},Annotations:map[string]string{io.kubernetes.container.hash: ce9bca44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef04e95ad1fe933b031cad975c37a81891ea763c78a0635e4ed9e5a08545d83,PodSandboxId:fac761f91352a61431ccf85bcb6b64d7fe27631568accb2971a821fd80a08a8b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner
@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1702418679799337899,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-qccjq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c22d109-bee3-43f0-af69-51c2db7ebc0b,},Annotations:map[string]string{io.kubernetes.container.hash: d139ac8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb98f8d7789efc274b1cf924697cb141142e383644b53888a17ba15c569025,PodSandboxId:e0d58ff32db7cedbfc823cb05940833bc6692c8a98b08c77d69165ffc769df31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s
-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702418659624390243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fed1501-c613-4f8b-adb0-b94e9c7c30d8,},Annotations:map[string]string{io.kubernetes.container.hash: c639785a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d919eb6810b21cbb9196d13258c29f88172688f2a3fc47a26646d954fe44437,PodSandboxId:ee2881903ffaa04fd4b2dbda90ca8ba7f4cadeb1b9402775cb059db257ea329f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube
-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702418652795496165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pjzv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e145d1c-391a-4995-9f1d-d133d382adc4,},Annotations:map[string]string{io.kubernetes.container.hash: e0d07811,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20f0abe6fb84210558e8488bcd0c3d00bc1a70c4ee1a58c3509d87d431f7e88,PodSandboxId:00b8b393b274b3191069b82176fee3eea8f5fc9fc6cfb7895b7e8de0d9b908de,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044a
eb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702418642317442308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wwrgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00c258da-10ba-4e62-9ae4-0ec482dc23ea,},Annotations:map[string]string{io.kubernetes.container.hash: d3b7a77a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53bacbebf69a069a9e72b6111c84f56ce650ba89aeb0470ce89711fc7f26274e,PodSandboxId:65a204f35462c9e03b4bd71d04a14264802ff79a4fbef69308f49855e69221b0,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702418617835782518,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d32cb5b6e84fc769121d58d6fc32c246,},Annotations:map[string]string{io.kubernetes.container.hash: b47b8a21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e321afdf117e7d05e267871cf0f1a175739a242ca3e0b4ff06876917f8d35d44,PodSandboxId:1fbfb04e9bf39d63cc151d9498b201f81efcce516aeb4a200fa6f61237bc9ea1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d
4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702418617617557832,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ee24e51ef69edeea560f95940655be4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a13315939f9f84a673ce03e6ee0001b426e72d67e06f33b3ea07a6b196f3292,PodSandboxId:5193fa5ec2c64d8b6a6f047eb72466cb3fce272b153bca331293d49f53784304,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e
7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702418617042580820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95206490b18d61bd7c5803d7aafb2e07,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1a80dbf9b6a168ae47ecd32107e17d0fe6dbefcb1ca495a7f868935f3244fa,PodSandboxId:abe71fe755873f9471bb01f8c2b737b4dd8541887031d0ec4a58b5023274b491,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37d
b33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702418617020007277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-361656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc3200671f47f1ba478f0e65eded0e8,},Annotations:map[string]string{io.kubernetes.container.hash: 10631f5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fe95486f-b71a-4535-a552-48e1dca55bea name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8a3cb2b14e2f       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   11126c04826ce       hello-world-app-5d77478584-99x22
	855635ac50884       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   867a762363d97       headlamp-777fd4b855-wvjmv
	e1552202dfef0       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   8ffd8386b6c04       nginx
	91c228d72c688       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   2ceb5cba92a48       gcp-auth-d4c87556c-9qbm2
	01c626226dd36       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   579d1fdad5ef5       ingress-nginx-admission-patch-9r5ds
	041d4cac61ba9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   ea58cc5be71f0       ingress-nginx-admission-create-pts6h
	aef04e95ad1fe       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   fac761f91352a       local-path-provisioner-78b46b4d5c-qccjq
	96fb98f8d7789       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   e0d58ff32db7c       storage-provisioner
	0d919eb6810b2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             3 minutes ago       Running             kube-proxy                0                   ee2881903ffaa       kube-proxy-pjzv8
	a20f0abe6fb84       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   00b8b393b274b       coredns-5dd5756b68-wwrgq
	53bacbebf69a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   65a204f35462c       etcd-addons-361656
	e321afdf117e7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   1fbfb04e9bf39       kube-scheduler-addons-361656
	8a13315939f9f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   5193fa5ec2c64       kube-controller-manager-addons-361656
	6e1a80dbf9b6a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   abe71fe755873       kube-apiserver-addons-361656
	
	* 
	* ==> coredns [a20f0abe6fb84210558e8488bcd0c3d00bc1a70c4ee1a58c3509d87d431f7e88] <==
	* [INFO] 10.244.0.8:56570 - 272 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131815s
	[INFO] 10.244.0.8:57194 - 4023 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073234s
	[INFO] 10.244.0.8:57194 - 60853 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099627s
	[INFO] 10.244.0.8:52834 - 44886 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058108s
	[INFO] 10.244.0.8:52834 - 31317 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100993s
	[INFO] 10.244.0.8:46507 - 62039 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006889s
	[INFO] 10.244.0.8:46507 - 30548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106004s
	[INFO] 10.244.0.8:53444 - 37594 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00016213s
	[INFO] 10.244.0.8:53444 - 61150 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074528s
	[INFO] 10.244.0.8:40026 - 38970 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110441s
	[INFO] 10.244.0.8:40026 - 55356 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108454s
	[INFO] 10.244.0.8:47959 - 25113 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112081s
	[INFO] 10.244.0.8:47959 - 26911 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000129185s
	[INFO] 10.244.0.8:55228 - 37550 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008047s
	[INFO] 10.244.0.8:55228 - 3759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093696s
	[INFO] 10.244.0.21:36387 - 50100 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000659174s
	[INFO] 10.244.0.21:57515 - 60196 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411159s
	[INFO] 10.244.0.21:55002 - 14582 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016397s
	[INFO] 10.244.0.21:37650 - 24884 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105272s
	[INFO] 10.244.0.21:60081 - 59318 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000145153s
	[INFO] 10.244.0.21:56608 - 23169 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000061474s
	[INFO] 10.244.0.21:33512 - 36488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000799017s
	[INFO] 10.244.0.21:46273 - 13739 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000494279s
	[INFO] 10.244.0.24:50119 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000333661s
	[INFO] 10.244.0.24:36432 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149901s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-361656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-361656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=addons-361656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_03_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-361656
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-361656
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:07:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:06:49 +0000   Tue, 12 Dec 2023 22:03:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:06:49 +0000   Tue, 12 Dec 2023 22:03:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:06:49 +0000   Tue, 12 Dec 2023 22:03:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:06:49 +0000   Tue, 12 Dec 2023 22:03:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    addons-361656
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fb13094a34742a49f4a05303ab76d4b
	  System UUID:                4fb13094-a347-42a4-9f4a-05303ab76d4b
	  Boot ID:                    586aec9a-7669-45f5-9ea9-92113e5b0f97
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-99x22           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-d4c87556c-9qbm2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  headlamp                    headlamp-777fd4b855-wvjmv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 coredns-5dd5756b68-wwrgq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m8s
	  kube-system                 etcd-addons-361656                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m23s
	  kube-system                 kube-apiserver-addons-361656               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-addons-361656      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-pjzv8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-361656               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  local-path-storage          local-path-provisioner-78b46b4d5c-qccjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 4m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node addons-361656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s (x8 over 4m29s)  kubelet          Node addons-361656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node addons-361656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node addons-361656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node addons-361656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node addons-361656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m21s                  kubelet          Node addons-361656 status is now: NodeReady
	  Normal  RegisteredNode           4m9s                   node-controller  Node addons-361656 event: Registered Node addons-361656 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.059867] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.783760] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.106659] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.135210] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.103027] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.211475] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.086423] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +8.777420] systemd-fstab-generator[1244]: Ignoring "noauto" for root device
	[Dec12 22:04] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.970237] kauditd_printk_skb: 64 callbacks suppressed
	[ +23.118308] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.701765] kauditd_printk_skb: 20 callbacks suppressed
	[Dec12 22:05] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.053734] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.624757] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.532133] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.085550] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.124870] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.386857] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.008334] kauditd_printk_skb: 3 callbacks suppressed
	[Dec12 22:06] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 22:07] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [53bacbebf69a069a9e72b6111c84f56ce650ba89aeb0470ce89711fc7f26274e] <==
	* {"level":"warn","ts":"2023-12-12T22:04:52.682304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.520428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2023-12-12T22:04:52.68236Z","caller":"traceutil/trace.go:171","msg":"trace[52397965] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1013; }","duration":"101.649564ms","start":"2023-12-12T22:04:52.580703Z","end":"2023-12-12T22:04:52.682353Z","steps":["trace[52397965] 'agreement among raft nodes before linearized reading'  (duration: 101.477962ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:04:52.682335Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T22:04:52.329953Z","time spent":"352.2696ms","remote":"127.0.0.1:37198","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1007 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-12T22:04:52.681416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.888647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82003"}
	{"level":"info","ts":"2023-12-12T22:04:52.682638Z","caller":"traceutil/trace.go:171","msg":"trace[596890108] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1013; }","duration":"257.1237ms","start":"2023-12-12T22:04:52.425506Z","end":"2023-12-12T22:04:52.682629Z","steps":["trace[596890108] 'agreement among raft nodes before linearized reading'  (duration: 255.694017ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:07.559417Z","caller":"traceutil/trace.go:171","msg":"trace[95224457] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"225.553617ms","start":"2023-12-12T22:05:07.333657Z","end":"2023-12-12T22:05:07.55921Z","steps":["trace[95224457] 'process raft request'  (duration: 225.167074ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:07.560192Z","caller":"traceutil/trace.go:171","msg":"trace[4175013] linearizableReadLoop","detail":"{readStateIndex:1126; appliedIndex:1126; }","duration":"137.083868ms","start":"2023-12-12T22:05:07.423098Z","end":"2023-12-12T22:05:07.560182Z","steps":["trace[4175013] 'read index received'  (duration: 137.079938ms)","trace[4175013] 'applied index is now lower than readState.Index'  (duration: 3.352µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:05:07.570921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.761689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82331"}
	{"level":"info","ts":"2023-12-12T22:05:07.577366Z","caller":"traceutil/trace.go:171","msg":"trace[743277275] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1094; }","duration":"154.272353ms","start":"2023-12-12T22:05:07.423076Z","end":"2023-12-12T22:05:07.577348Z","steps":["trace[743277275] 'agreement among raft nodes before linearized reading'  (duration: 137.632963ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:22.608841Z","caller":"traceutil/trace.go:171","msg":"trace[342607671] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"493.136081ms","start":"2023-12-12T22:05:22.115691Z","end":"2023-12-12T22:05:22.608827Z","steps":["trace[342607671] 'process raft request'  (duration: 493.034464ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:22.608977Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T22:05:22.115665Z","time spent":"493.243887ms","remote":"127.0.0.1:37222","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<>"}
	{"level":"info","ts":"2023-12-12T22:05:22.613774Z","caller":"traceutil/trace.go:171","msg":"trace[1516690207] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1197; }","duration":"223.792388ms","start":"2023-12-12T22:05:22.389969Z","end":"2023-12-12T22:05:22.613761Z","steps":["trace[1516690207] 'read index received'  (duration: 218.83761ms)","trace[1516690207] 'applied index is now lower than readState.Index'  (duration: 4.954202ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:05:22.614484Z","caller":"traceutil/trace.go:171","msg":"trace[2063432571] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"403.922965ms","start":"2023-12-12T22:05:22.210448Z","end":"2023-12-12T22:05:22.614371Z","steps":["trace[2063432571] 'process raft request'  (duration: 403.244604ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:22.614838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T22:05:22.210421Z","time spent":"404.214471ms","remote":"127.0.0.1:37178","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":732,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7c6974c4d8-b8x52.17a034cedc24c5c6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-7c6974c4d8-b8x52.17a034cedc24c5c6\" value_size:625 lease:985879712630515487 >> failure:<>"}
	{"level":"warn","ts":"2023-12-12T22:05:22.616224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.250441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2023-12-12T22:05:22.616616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.835354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82331"}
	{"level":"info","ts":"2023-12-12T22:05:22.616689Z","caller":"traceutil/trace.go:171","msg":"trace[760696770] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1164; }","duration":"192.915518ms","start":"2023-12-12T22:05:22.423764Z","end":"2023-12-12T22:05:22.616679Z","steps":["trace[760696770] 'agreement among raft nodes before linearized reading'  (duration: 192.695345ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:22.616479Z","caller":"traceutil/trace.go:171","msg":"trace[1808990297] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1164; }","duration":"226.522088ms","start":"2023-12-12T22:05:22.389945Z","end":"2023-12-12T22:05:22.616467Z","steps":["trace[1808990297] 'agreement among raft nodes before linearized reading'  (duration: 225.802645ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:22.617392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.276971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-znnlk.17a034bf93992f0b\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2023-12-12T22:05:22.617503Z","caller":"traceutil/trace.go:171","msg":"trace[1071657038] range","detail":"{range_begin:/registry/events/gadget/gadget-znnlk.17a034bf93992f0b; range_end:; response_count:1; response_revision:1164; }","duration":"170.392356ms","start":"2023-12-12T22:05:22.447102Z","end":"2023-12-12T22:05:22.617495Z","steps":["trace[1071657038] 'agreement among raft nodes before linearized reading'  (duration: 170.121629ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:48.651708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.389147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-12T22:05:48.651902Z","caller":"traceutil/trace.go:171","msg":"trace[440542841] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1434; }","duration":"108.695015ms","start":"2023-12-12T22:05:48.543182Z","end":"2023-12-12T22:05:48.651877Z","steps":["trace[440542841] 'count revisions from in-memory index tree'  (duration: 108.247983ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:06:21.374375Z","caller":"traceutil/trace.go:171","msg":"trace[756822565] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"103.330265ms","start":"2023-12-12T22:06:21.271023Z","end":"2023-12-12T22:06:21.374354Z","steps":["trace[756822565] 'process raft request'  (duration: 103.109608ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:06:25.80671Z","caller":"traceutil/trace.go:171","msg":"trace[973700587] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"168.73689ms","start":"2023-12-12T22:06:25.637957Z","end":"2023-12-12T22:06:25.806694Z","steps":["trace[973700587] 'process raft request'  (duration: 168.630443ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:07:02.303412Z","caller":"traceutil/trace.go:171","msg":"trace[1282758935] transaction","detail":"{read_only:false; response_revision:1778; number_of_response:1; }","duration":"261.319001ms","start":"2023-12-12T22:07:02.042075Z","end":"2023-12-12T22:07:02.303394Z","steps":["trace[1282758935] 'process raft request'  (duration: 261.074995ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [91c228d72c6887e427e4b02bcdadbb57a5a4f6ba35695daa5ad74c544d1deb91] <==
	* 2023/12/12 22:05:24 GCP Auth Webhook started!
	2023/12/12 22:05:30 Ready to marshal response ...
	2023/12/12 22:05:30 Ready to write response ...
	2023/12/12 22:05:32 Ready to marshal response ...
	2023/12/12 22:05:32 Ready to write response ...
	2023/12/12 22:05:35 Ready to marshal response ...
	2023/12/12 22:05:35 Ready to write response ...
	2023/12/12 22:05:41 Ready to marshal response ...
	2023/12/12 22:05:41 Ready to write response ...
	2023/12/12 22:05:41 Ready to marshal response ...
	2023/12/12 22:05:41 Ready to write response ...
	2023/12/12 22:05:45 Ready to marshal response ...
	2023/12/12 22:05:45 Ready to write response ...
	2023/12/12 22:05:45 Ready to marshal response ...
	2023/12/12 22:05:45 Ready to write response ...
	2023/12/12 22:05:45 Ready to marshal response ...
	2023/12/12 22:05:45 Ready to write response ...
	2023/12/12 22:05:56 Ready to marshal response ...
	2023/12/12 22:05:56 Ready to write response ...
	2023/12/12 22:06:21 Ready to marshal response ...
	2023/12/12 22:06:21 Ready to write response ...
	2023/12/12 22:06:39 Ready to marshal response ...
	2023/12/12 22:06:39 Ready to write response ...
	2023/12/12 22:07:53 Ready to marshal response ...
	2023/12/12 22:07:53 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:08:05 up 4 min,  0 users,  load average: 1.07, 1.93, 0.98
	Linux addons-361656 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6e1a80dbf9b6a168ae47ecd32107e17d0fe6dbefcb1ca495a7f868935f3244fa] <==
	* I1212 22:05:41.017563       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 22:05:45.107977       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.169.92"}
	I1212 22:06:34.921217       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 22:06:45.977971       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 22:06:57.214920       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.215060       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.228688       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.228800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.244070       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.244324       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.257831       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.257934       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.271208       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.271346       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.277456       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.277517       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.283980       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.284047       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:57.286067       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:57.286131       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 22:06:58.277412       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 22:06:58.284976       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1212 22:06:58.309927       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 22:07:54.214530       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.19.38"}
	E1212 22:07:56.873045       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [8a13315939f9f84a673ce03e6ee0001b426e72d67e06f33b3ea07a6b196f3292] <==
	* W1212 22:07:17.338621       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:17.338714       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:07:26.763292       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1212 22:07:26.763382       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 22:07:27.191559       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1212 22:07:27.191681       1 shared_informer.go:318] Caches are synced for garbage collector
	W1212 22:07:31.363501       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:31.363548       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:33.037567       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:33.037603       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:34.685491       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:34.685698       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:42.225711       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:42.225937       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:07:53.949697       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1212 22:07:53.984089       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-99x22"
	I1212 22:07:54.011620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.563689ms"
	I1212 22:07:54.022210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.394405ms"
	I1212 22:07:54.023583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="224.58µs"
	I1212 22:07:54.039833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.442µs"
	I1212 22:07:56.713524       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 22:07:56.724412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="12.56µs"
	I1212 22:07:56.732041       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:07:56.852429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.951285ms"
	I1212 22:07:56.852675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="128.655µs"
	
	* 
	* ==> kube-proxy [0d919eb6810b21cbb9196d13258c29f88172688f2a3fc47a26646d954fe44437] <==
	* I1212 22:04:15.456155       1 server_others.go:69] "Using iptables proxy"
	I1212 22:04:15.795927       1 node.go:141] Successfully retrieved node IP: 192.168.39.86
	I1212 22:04:16.651575       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:04:16.651645       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:04:16.733525       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:04:16.733611       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:04:16.733811       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:04:16.733821       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:04:16.769079       1 config.go:188] "Starting service config controller"
	I1212 22:04:16.769140       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:04:16.769178       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:04:16.769182       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:04:16.769673       1 config.go:315] "Starting node config controller"
	I1212 22:04:16.769679       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:04:17.005861       1 shared_informer.go:318] Caches are synced for node config
	I1212 22:04:17.097447       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:04:17.116515       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e321afdf117e7d05e267871cf0f1a175739a242ca3e0b4ff06876917f8d35d44] <==
	* W1212 22:03:41.191223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:41.191314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:41.191575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:41.191687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:42.032303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:03:42.032358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:03:42.083752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:03:42.083811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:03:42.099634       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:03:42.099712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 22:03:42.181802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:03:42.181892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 22:03:42.182178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:42.182200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:42.190770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:03:42.190912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:03:42.192375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:42.192440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:42.274290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 22:03:42.274392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 22:03:42.364140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:03:42.364347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 22:03:42.636358       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:03:42.636567       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1212 22:03:44.472455       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:03:14 UTC, ends at Tue 2023-12-12 22:08:05 UTC. --
	Dec 12 22:07:53 addons-361656 kubelet[1251]: I1212 22:07:53.993410    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="53805fe1-a499-4ccd-b440-ffd51f141d44" containerName="volume-snapshot-controller"
	Dec 12 22:07:53 addons-361656 kubelet[1251]: I1212 22:07:53.993416    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="231b62bc-5fa9-4829-a7d6-9b5ff69477b3" containerName="csi-attacher"
	Dec 12 22:07:54 addons-361656 kubelet[1251]: I1212 22:07:54.080701    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c5d8c21c-f196-454c-b323-28842ec5c5b8-gcp-creds\") pod \"hello-world-app-5d77478584-99x22\" (UID: \"c5d8c21c-f196-454c-b323-28842ec5c5b8\") " pod="default/hello-world-app-5d77478584-99x22"
	Dec 12 22:07:54 addons-361656 kubelet[1251]: I1212 22:07:54.080748    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhpj\" (UniqueName: \"kubernetes.io/projected/c5d8c21c-f196-454c-b323-28842ec5c5b8-kube-api-access-tzhpj\") pod \"hello-world-app-5d77478584-99x22\" (UID: \"c5d8c21c-f196-454c-b323-28842ec5c5b8\") " pod="default/hello-world-app-5d77478584-99x22"
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.392453    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmsw\" (UniqueName: \"kubernetes.io/projected/0df541c9-11fb-444a-9fd1-cf852f2d1bd4-kube-api-access-kqmsw\") pod \"0df541c9-11fb-444a-9fd1-cf852f2d1bd4\" (UID: \"0df541c9-11fb-444a-9fd1-cf852f2d1bd4\") "
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.397015    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df541c9-11fb-444a-9fd1-cf852f2d1bd4-kube-api-access-kqmsw" (OuterVolumeSpecName: "kube-api-access-kqmsw") pod "0df541c9-11fb-444a-9fd1-cf852f2d1bd4" (UID: "0df541c9-11fb-444a-9fd1-cf852f2d1bd4"). InnerVolumeSpecName "kube-api-access-kqmsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.493098    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kqmsw\" (UniqueName: \"kubernetes.io/projected/0df541c9-11fb-444a-9fd1-cf852f2d1bd4-kube-api-access-kqmsw\") on node \"addons-361656\" DevicePath \"\""
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.773476    1251 scope.go:117] "RemoveContainer" containerID="86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b"
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.842671    1251 scope.go:117] "RemoveContainer" containerID="86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b"
	Dec 12 22:07:55 addons-361656 kubelet[1251]: E1212 22:07:55.857562    1251 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b\": container with ID starting with 86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b not found: ID does not exist" containerID="86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b"
	Dec 12 22:07:55 addons-361656 kubelet[1251]: I1212 22:07:55.857656    1251 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b"} err="failed to get container status \"86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b\": rpc error: code = NotFound desc = could not find container \"86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b\": container with ID starting with 86c95a1a0e6438bdb55a6f9b3f57b45ecc88d6b86b827d38f6a25a5ad7b9927b not found: ID does not exist"
	Dec 12 22:07:56 addons-361656 kubelet[1251]: I1212 22:07:56.444160    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0df541c9-11fb-444a-9fd1-cf852f2d1bd4" path="/var/lib/kubelet/pods/0df541c9-11fb-444a-9fd1-cf852f2d1bd4/volumes"
	Dec 12 22:07:58 addons-361656 kubelet[1251]: I1212 22:07:58.443846    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="533d35a4-b8a6-4273-8ca1-b4126c10ec65" path="/var/lib/kubelet/pods/533d35a4-b8a6-4273-8ca1-b4126c10ec65/volumes"
	Dec 12 22:07:58 addons-361656 kubelet[1251]: I1212 22:07:58.444421    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5f162bf0-b36e-4d15-bf62-0dbadd581cd8" path="/var/lib/kubelet/pods/5f162bf0-b36e-4d15-bf62-0dbadd581cd8/volumes"
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.126130    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e4a747a-0126-4ff4-9292-5071b16cb10d-webhook-cert\") pod \"6e4a747a-0126-4ff4-9292-5071b16cb10d\" (UID: \"6e4a747a-0126-4ff4-9292-5071b16cb10d\") "
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.126217    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p44pv\" (UniqueName: \"kubernetes.io/projected/6e4a747a-0126-4ff4-9292-5071b16cb10d-kube-api-access-p44pv\") pod \"6e4a747a-0126-4ff4-9292-5071b16cb10d\" (UID: \"6e4a747a-0126-4ff4-9292-5071b16cb10d\") "
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.128728    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4a747a-0126-4ff4-9292-5071b16cb10d-kube-api-access-p44pv" (OuterVolumeSpecName: "kube-api-access-p44pv") pod "6e4a747a-0126-4ff4-9292-5071b16cb10d" (UID: "6e4a747a-0126-4ff4-9292-5071b16cb10d"). InnerVolumeSpecName "kube-api-access-p44pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.130403    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e4a747a-0126-4ff4-9292-5071b16cb10d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6e4a747a-0126-4ff4-9292-5071b16cb10d" (UID: "6e4a747a-0126-4ff4-9292-5071b16cb10d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.226707    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p44pv\" (UniqueName: \"kubernetes.io/projected/6e4a747a-0126-4ff4-9292-5071b16cb10d-kube-api-access-p44pv\") on node \"addons-361656\" DevicePath \"\""
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.226770    1251 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e4a747a-0126-4ff4-9292-5071b16cb10d-webhook-cert\") on node \"addons-361656\" DevicePath \"\""
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.444174    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6e4a747a-0126-4ff4-9292-5071b16cb10d" path="/var/lib/kubelet/pods/6e4a747a-0126-4ff4-9292-5071b16cb10d/volumes"
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.821376    1251 scope.go:117] "RemoveContainer" containerID="ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785"
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.845216    1251 scope.go:117] "RemoveContainer" containerID="ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785"
	Dec 12 22:08:00 addons-361656 kubelet[1251]: E1212 22:08:00.845813    1251 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785\": container with ID starting with ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785 not found: ID does not exist" containerID="ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785"
	Dec 12 22:08:00 addons-361656 kubelet[1251]: I1212 22:08:00.845883    1251 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785"} err="failed to get container status \"ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785\": rpc error: code = NotFound desc = could not find container \"ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785\": container with ID starting with ebe21b022b1d691306ae42da43eeadce83c72528b4827856d1d8be790aa77785 not found: ID does not exist"
	
	* 
	* ==> storage-provisioner [96fb98f8d7789efc274b1cf924697cb141142e383644b53888a17ba15c569025] <==
	* I1212 22:04:20.715608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:04:20.897013       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:04:20.897078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:04:21.184084       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:04:21.185913       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b668db0-2bbe-45b0-8f6e-10f119371cae", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-361656_f5cf244a-ae26-4b2a-bce3-c141f5fa1dfa became leader
	I1212 22:04:21.185982       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-361656_f5cf244a-ae26-4b2a-bce3-c141f5fa1dfa!
	I1212 22:04:21.386826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-361656_f5cf244a-ae26-4b2a-bce3-c141f5fa1dfa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-361656 -n addons-361656
helpers_test.go:261: (dbg) Run:  kubectl --context addons-361656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (160.99s)

TestAddons/StoppedEnableDisable (155.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-361656
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-361656: exit status 82 (2m1.728703008s)

-- stdout --
	* Stopping node "addons-361656"  ...
	* Stopping node "addons-361656"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-361656" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-361656
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-361656: exit status 11 (21.584777374s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-361656" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-361656
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-361656: exit status 11 (6.143547871s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-361656" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-361656
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-361656: exit status 11 (6.143963759s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-361656" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image ls --format short --alsologtostderr: (2.484089381s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136031 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136031 image ls --format short --alsologtostderr:
I1212 22:14:58.820425   91668 out.go:296] Setting OutFile to fd 1 ...
I1212 22:14:58.820636   91668 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:14:58.820661   91668 out.go:309] Setting ErrFile to fd 2...
I1212 22:14:58.820678   91668 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:14:58.821039   91668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
I1212 22:14:58.821908   91668 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:14:58.822061   91668 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:14:58.822709   91668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:14:58.822758   91668 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:14:58.839302   91668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
I1212 22:14:58.839846   91668 main.go:141] libmachine: () Calling .GetVersion
I1212 22:14:58.840594   91668 main.go:141] libmachine: Using API Version  1
I1212 22:14:58.840619   91668 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:14:58.841083   91668 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:14:58.841275   91668 main.go:141] libmachine: (functional-136031) Calling .GetState
I1212 22:14:58.843191   91668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:14:58.843226   91668 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:14:58.859946   91668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
I1212 22:14:58.860348   91668 main.go:141] libmachine: () Calling .GetVersion
I1212 22:14:58.860783   91668 main.go:141] libmachine: Using API Version  1
I1212 22:14:58.860800   91668 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:14:58.861144   91668 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:14:58.861442   91668 main.go:141] libmachine: (functional-136031) Calling .DriverName
I1212 22:14:58.861682   91668 ssh_runner.go:195] Run: systemctl --version
I1212 22:14:58.861713   91668 main.go:141] libmachine: (functional-136031) Calling .GetSSHHostname
I1212 22:14:58.864326   91668 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:14:58.864706   91668 main.go:141] libmachine: (functional-136031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2e:47", ip: ""} in network mk-functional-136031: {Iface:virbr1 ExpiryTime:2023-12-12 23:12:08 +0000 UTC Type:0 Mac:52:54:00:da:2e:47 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:functional-136031 Clientid:01:52:54:00:da:2e:47}
I1212 22:14:58.864732   91668 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined IP address 192.168.50.133 and MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:14:58.864888   91668 main.go:141] libmachine: (functional-136031) Calling .GetSSHPort
I1212 22:14:58.865115   91668 main.go:141] libmachine: (functional-136031) Calling .GetSSHKeyPath
I1212 22:14:58.865276   91668 main.go:141] libmachine: (functional-136031) Calling .GetSSHUsername
I1212 22:14:58.865456   91668 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/functional-136031/id_rsa Username:docker}
I1212 22:14:59.044983   91668 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 22:15:01.151741   91668 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.106708206s)
W1212 22:15:01.151854   91668 cache_images.go:715] Failed to list images for profile functional-136031 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1212 22:15:01.145709    7370 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2023-12-12T22:15:01Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1212 22:15:01.151957   91668 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.151978   91668 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.152278   91668 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.152310   91668 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:01.152320   91668 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.152330   91668 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.152577   91668 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.152595   91668 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (169.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-220067 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1212 22:16:47.095414   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-220067 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.624027634s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-220067 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-220067 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9e4fd974-1e2b-49ab-9b6c-d12e26372b63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9e4fd974-1e2b-49ab-9b6c-d12e26372b63] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.014486548s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1212 22:18:09.015916   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-220067 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.882940467s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-220067 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.145
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons disable ingress-dns --alsologtostderr -v=1
E1212 22:19:17.803749   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:17.809119   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:17.819406   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:17.839736   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:17.880117   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:17.960429   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:18.120871   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:18.441492   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:19.082470   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons disable ingress-dns --alsologtostderr -v=1: (2.484813264s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons disable ingress --alsologtostderr -v=1
E1212 22:19:20.363369   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:22.924297   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons disable ingress --alsologtostderr -v=1: (7.734229274s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-220067 -n ingress-addon-legacy-220067
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 logs -n 25
E1212 22:19:28.045322   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-220067 logs -n 25: (1.232427544s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-136031 ssh sudo                                             | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC | 12 Dec 23 22:14 UTC |
	|                | umount -f /mount-9p                                                    |                             |         |         |                     |                     |
	| update-context | functional-136031                                                      | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC | 12 Dec 23 22:14 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-136031                                                   | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-136031                                                   | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-136031 ssh findmnt                                          | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-136031                                                   | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| image          | functional-136031                                                      | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC | 12 Dec 23 22:15 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-136031 ssh findmnt                                          | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC | 12 Dec 23 22:14 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-136031 ssh findmnt                                          | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:14 UTC | 12 Dec 23 22:14 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-136031 ssh findmnt                                          | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-136031                                                   | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-136031                                                      | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-136031 ssh pgrep                                            | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-136031 image build -t                                       | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	|                | localhost/my-image:functional-136031                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-136031                                                      | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-136031                                                      | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-136031 image ls                                             | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	| delete         | -p functional-136031                                                   | functional-136031           | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:15 UTC |
	| start          | -p ingress-addon-legacy-220067                                         | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:15 UTC | 12 Dec 23 22:16 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-220067                                            | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:16 UTC | 12 Dec 23 22:16 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-220067                                            | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:16 UTC | 12 Dec 23 22:16 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-220067                                            | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:17 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-220067 ip                                         | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:19 UTC | 12 Dec 23 22:19 UTC |
	| addons         | ingress-addon-legacy-220067                                            | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:19 UTC | 12 Dec 23 22:19 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-220067                                            | ingress-addon-legacy-220067 | jenkins | v1.32.0 | 12 Dec 23 22:19 UTC | 12 Dec 23 22:19 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:15:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:15:09.583967   92149 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:15:09.584247   92149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:15:09.584256   92149 out.go:309] Setting ErrFile to fd 2...
	I1212 22:15:09.584261   92149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:15:09.584426   92149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:15:09.585047   92149 out.go:303] Setting JSON to false
	I1212 22:15:09.585934   92149 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10664,"bootTime":1702408646,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:15:09.585998   92149 start.go:138] virtualization: kvm guest
	I1212 22:15:09.588373   92149 out.go:177] * [ingress-addon-legacy-220067] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:15:09.590336   92149 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:15:09.590337   92149 notify.go:220] Checking for updates...
	I1212 22:15:09.591961   92149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:15:09.593726   92149 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:15:09.595286   92149 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:15:09.596753   92149 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:15:09.598071   92149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:15:09.599506   92149 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:15:09.635531   92149 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 22:15:09.637116   92149 start.go:298] selected driver: kvm2
	I1212 22:15:09.637137   92149 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:15:09.637153   92149 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:15:09.637883   92149 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:15:09.637976   92149 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:15:09.652749   92149 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:15:09.652802   92149 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:15:09.653026   92149 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:15:09.653081   92149 cni.go:84] Creating CNI manager for ""
	I1212 22:15:09.653094   92149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:15:09.653104   92149 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:15:09.653115   92149 start_flags.go:323] config:
	{Name:ingress-addon-legacy-220067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-220067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:15:09.653259   92149 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:15:09.655073   92149 out.go:177] * Starting control plane node ingress-addon-legacy-220067 in cluster ingress-addon-legacy-220067
	I1212 22:15:09.656457   92149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:15:09.681924   92149 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 22:15:09.681964   92149 cache.go:56] Caching tarball of preloaded images
	I1212 22:15:09.682104   92149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:15:09.683917   92149 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 22:15:09.685367   92149 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:15:09.718188   92149 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 22:15:12.976976   92149 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:15:12.977098   92149 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:15:13.966524   92149 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1212 22:15:13.967031   92149 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/config.json ...
	I1212 22:15:13.967082   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/config.json: {Name:mk7e96646a8902e961cf9d6d43e675dec7e8b06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:13.967316   92149 start.go:365] acquiring machines lock for ingress-addon-legacy-220067: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:15:13.967371   92149 start.go:369] acquired machines lock for "ingress-addon-legacy-220067" in 32.142µs
	I1212 22:15:13.967397   92149 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-220067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-220067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:15:13.967468   92149 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 22:15:13.969642   92149 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1212 22:15:13.969837   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:15:13.969896   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:15:13.984389   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I1212 22:15:13.984927   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:15:13.985509   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:15:13.985530   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:15:13.985950   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:15:13.986145   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetMachineName
	I1212 22:15:13.986306   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:13.986464   92149 start.go:159] libmachine.API.Create for "ingress-addon-legacy-220067" (driver="kvm2")
	I1212 22:15:13.986500   92149 client.go:168] LocalClient.Create starting
	I1212 22:15:13.986530   92149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 22:15:13.986563   92149 main.go:141] libmachine: Decoding PEM data...
	I1212 22:15:13.986579   92149 main.go:141] libmachine: Parsing certificate...
	I1212 22:15:13.986629   92149 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 22:15:13.986647   92149 main.go:141] libmachine: Decoding PEM data...
	I1212 22:15:13.986660   92149 main.go:141] libmachine: Parsing certificate...
	I1212 22:15:13.986676   92149 main.go:141] libmachine: Running pre-create checks...
	I1212 22:15:13.986687   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .PreCreateCheck
	I1212 22:15:13.987041   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetConfigRaw
	I1212 22:15:13.987431   92149 main.go:141] libmachine: Creating machine...
	I1212 22:15:13.987448   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Create
	I1212 22:15:13.987610   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Creating KVM machine...
	I1212 22:15:13.988890   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found existing default KVM network
	I1212 22:15:13.989694   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:13.989533   92172 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I1212 22:15:13.995152   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | trying to create private KVM network mk-ingress-addon-legacy-220067 192.168.39.0/24...
	I1212 22:15:14.066850   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | private KVM network mk-ingress-addon-legacy-220067 192.168.39.0/24 created
	I1212 22:15:14.066886   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:14.066816   92172 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:15:14.066907   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067 ...
	I1212 22:15:14.066927   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:15:14.067016   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:15:14.292807   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:14.292679   92172 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa...
	I1212 22:15:14.523572   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:14.523440   92172 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/ingress-addon-legacy-220067.rawdisk...
	I1212 22:15:14.523611   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Writing magic tar header
	I1212 22:15:14.523631   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Writing SSH key tar header
	I1212 22:15:14.523645   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:14.523568   92172 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067 ...
	I1212 22:15:14.523810   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067
	I1212 22:15:14.523867   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067 (perms=drwx------)
	I1212 22:15:14.523887   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 22:15:14.523907   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:15:14.523922   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 22:15:14.523940   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 22:15:14.523950   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home/jenkins
	I1212 22:15:14.523962   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Checking permissions on dir: /home
	I1212 22:15:14.523973   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Skipping /home - not owner
	I1212 22:15:14.523986   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 22:15:14.524010   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 22:15:14.524023   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 22:15:14.524034   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 22:15:14.524050   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 22:15:14.524094   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Creating domain...
	I1212 22:15:14.525191   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) define libvirt domain using xml: 
	I1212 22:15:14.525213   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) <domain type='kvm'>
	I1212 22:15:14.525226   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <name>ingress-addon-legacy-220067</name>
	I1212 22:15:14.525235   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <memory unit='MiB'>4096</memory>
	I1212 22:15:14.525245   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <vcpu>2</vcpu>
	I1212 22:15:14.525254   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <features>
	I1212 22:15:14.525265   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <acpi/>
	I1212 22:15:14.525275   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <apic/>
	I1212 22:15:14.525286   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <pae/>
	I1212 22:15:14.525305   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     
	I1212 22:15:14.525318   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   </features>
	I1212 22:15:14.525332   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <cpu mode='host-passthrough'>
	I1212 22:15:14.525348   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   
	I1212 22:15:14.525360   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   </cpu>
	I1212 22:15:14.525371   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <os>
	I1212 22:15:14.525401   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <type>hvm</type>
	I1212 22:15:14.525430   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <boot dev='cdrom'/>
	I1212 22:15:14.525453   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <boot dev='hd'/>
	I1212 22:15:14.525488   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <bootmenu enable='no'/>
	I1212 22:15:14.525515   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   </os>
	I1212 22:15:14.525525   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   <devices>
	I1212 22:15:14.525538   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <disk type='file' device='cdrom'>
	I1212 22:15:14.525565   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/boot2docker.iso'/>
	I1212 22:15:14.525579   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <target dev='hdc' bus='scsi'/>
	I1212 22:15:14.525593   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <readonly/>
	I1212 22:15:14.525610   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </disk>
	I1212 22:15:14.525622   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <disk type='file' device='disk'>
	I1212 22:15:14.525638   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 22:15:14.525655   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/ingress-addon-legacy-220067.rawdisk'/>
	I1212 22:15:14.525669   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <target dev='hda' bus='virtio'/>
	I1212 22:15:14.525682   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </disk>
	I1212 22:15:14.525697   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <interface type='network'>
	I1212 22:15:14.525721   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <source network='mk-ingress-addon-legacy-220067'/>
	I1212 22:15:14.525745   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <model type='virtio'/>
	I1212 22:15:14.525772   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </interface>
	I1212 22:15:14.525792   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <interface type='network'>
	I1212 22:15:14.525808   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <source network='default'/>
	I1212 22:15:14.525820   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <model type='virtio'/>
	I1212 22:15:14.525838   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </interface>
	I1212 22:15:14.525851   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <serial type='pty'>
	I1212 22:15:14.525865   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <target port='0'/>
	I1212 22:15:14.525879   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </serial>
	I1212 22:15:14.525899   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <console type='pty'>
	I1212 22:15:14.525917   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <target type='serial' port='0'/>
	I1212 22:15:14.525931   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </console>
	I1212 22:15:14.525943   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     <rng model='virtio'>
	I1212 22:15:14.525959   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)       <backend model='random'>/dev/random</backend>
	I1212 22:15:14.525972   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     </rng>
	I1212 22:15:14.525984   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     
	I1212 22:15:14.526000   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)     
	I1212 22:15:14.526013   92149 main.go:141] libmachine: (ingress-addon-legacy-220067)   </devices>
	I1212 22:15:14.526023   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) </domain>
	I1212 22:15:14.526038   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) 
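For reference, a minimal sketch of how a domain definition like the XML dumped above can be registered and started through libvirt from Go, assuming the libvirt.org/go/libvirt bindings and a qemu:///system connection; this is an illustrative sketch, not minikube's actual kvm2 driver code, and the XML argument is a placeholder.

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers a domain from its XML description and boots it.
// URI and XML here are placeholders for illustration only.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // corresponds to the "Creating domain..." step in the log
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
```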
	I1212 22:15:14.530157   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:49:c4:97 in network default
	I1212 22:15:14.530800   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Ensuring networks are active...
	I1212 22:15:14.530833   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:14.531626   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Ensuring network default is active
	I1212 22:15:14.531982   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Ensuring network mk-ingress-addon-legacy-220067 is active
	I1212 22:15:14.532548   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Getting domain xml...
	I1212 22:15:14.533303   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Creating domain...
	I1212 22:15:15.760572   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Waiting to get IP...
	I1212 22:15:15.761498   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:15.761908   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:15.761939   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:15.761891   92172 retry.go:31] will retry after 289.189582ms: waiting for machine to come up
	I1212 22:15:16.052386   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.052897   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.052924   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:16.052840   92172 retry.go:31] will retry after 244.443843ms: waiting for machine to come up
	I1212 22:15:16.299381   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.299805   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.299833   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:16.299767   92172 retry.go:31] will retry after 374.923198ms: waiting for machine to come up
	I1212 22:15:16.676316   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.676747   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:16.676780   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:16.676678   92172 retry.go:31] will retry after 594.491313ms: waiting for machine to come up
	I1212 22:15:17.272548   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:17.272987   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:17.273016   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:17.272926   92172 retry.go:31] will retry after 464.577981ms: waiting for machine to come up
	I1212 22:15:17.739679   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:17.740297   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:17.740330   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:17.740240   92172 retry.go:31] will retry after 799.723209ms: waiting for machine to come up
	I1212 22:15:18.541392   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:18.541855   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:18.541892   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:18.541776   92172 retry.go:31] will retry after 798.281424ms: waiting for machine to come up
	I1212 22:15:19.341198   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:19.341603   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:19.341637   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:19.341551   92172 retry.go:31] will retry after 1.334182099s: waiting for machine to come up
	I1212 22:15:20.677957   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:20.678463   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:20.678495   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:20.678417   92172 retry.go:31] will retry after 1.133981676s: waiting for machine to come up
	I1212 22:15:21.813700   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:21.814091   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:21.814122   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:21.814039   92172 retry.go:31] will retry after 2.177344741s: waiting for machine to come up
	I1212 22:15:23.993223   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:23.993627   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:23.993652   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:23.993593   92172 retry.go:31] will retry after 2.832349962s: waiting for machine to come up
	I1212 22:15:26.828298   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:26.828706   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:26.828732   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:26.828651   92172 retry.go:31] will retry after 3.349484801s: waiting for machine to come up
	I1212 22:15:30.180437   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:30.180914   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:30.180945   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:30.180855   92172 retry.go:31] will retry after 3.920590616s: waiting for machine to come up
	I1212 22:15:34.102762   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:34.103081   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find current IP address of domain ingress-addon-legacy-220067 in network mk-ingress-addon-legacy-220067
	I1212 22:15:34.103111   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | I1212 22:15:34.103038   92172 retry.go:31] will retry after 5.03599514s: waiting for machine to come up
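The repeated "will retry after ..." lines above come from a polling loop with growing, jittered delays while the driver waits for the machine's DHCP lease. A minimal stand-alone sketch of that pattern in Go, using a hypothetical lookupIP stand-in rather than minikube's real retry helper:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the machine's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, mirroring the increasing
		// "will retry after" intervals in the log above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("timed out after %s waiting for IP", maxWait)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```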
	I1212 22:15:39.140285   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.140648   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has current primary IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.140666   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Found IP for machine: 192.168.39.145
	I1212 22:15:39.140677   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Reserving static IP address...
	I1212 22:15:39.140993   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-220067", mac: "52:54:00:ba:28:78", ip: "192.168.39.145"} in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.216177   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Getting to WaitForSSH function...
	I1212 22:15:39.216214   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Reserved static IP address: 192.168.39.145
	I1212 22:15:39.216236   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Waiting for SSH to be available...
	I1212 22:15:39.219349   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.219755   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.219795   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.219936   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Using SSH client type: external
	I1212 22:15:39.219965   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa (-rw-------)
	I1212 22:15:39.219991   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:15:39.220008   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | About to run SSH command:
	I1212 22:15:39.220018   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | exit 0
	I1212 22:15:39.319058   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | SSH cmd err, output: <nil>: 
	I1212 22:15:39.319374   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) KVM machine creation complete!
	I1212 22:15:39.319748   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetConfigRaw
	I1212 22:15:39.320313   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:39.320510   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:39.320666   92149 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 22:15:39.320682   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetState
	I1212 22:15:39.322009   92149 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 22:15:39.322027   92149 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 22:15:39.322036   92149 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 22:15:39.322046   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.324489   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.324846   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.324882   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.324989   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:39.325123   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.325291   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.325410   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:39.325584   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:39.325980   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:39.325994   92149 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 22:15:39.458447   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:15:39.458479   92149 main.go:141] libmachine: Detecting the provisioner...
	I1212 22:15:39.458494   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.461438   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.461798   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.461839   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.461951   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:39.462143   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.462291   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.462418   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:39.462589   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:39.462905   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:39.462916   92149 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 22:15:39.596161   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 22:15:39.596248   92149 main.go:141] libmachine: found compatible host: buildroot
	I1212 22:15:39.596266   92149 main.go:141] libmachine: Provisioning with buildroot...
	I1212 22:15:39.596280   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetMachineName
	I1212 22:15:39.596556   92149 buildroot.go:166] provisioning hostname "ingress-addon-legacy-220067"
	I1212 22:15:39.596593   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetMachineName
	I1212 22:15:39.596808   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.599653   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.600038   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.600068   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.600235   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:39.600416   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.600549   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.600673   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:39.600807   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:39.601150   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:39.601168   92149 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-220067 && echo "ingress-addon-legacy-220067" | sudo tee /etc/hostname
	I1212 22:15:39.744756   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-220067
	
	I1212 22:15:39.744800   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.747758   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.748110   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.748144   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.748288   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:39.748483   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.748621   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.748746   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:39.748905   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:39.749215   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:39.749233   92149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-220067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-220067/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-220067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:15:39.892281   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
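The hostname and /etc/hosts commands above are run over SSH against the freshly booted machine. A minimal sketch of executing such a provisioning command from Go, assuming golang.org/x/crypto/ssh with key-based auth; the address, user, key path, and command below are placeholders, not the values from this run.

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on a remote host using a private key file.
func runRemote(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Illustration only; a real provisioner should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}

func main() {
	out, err := runRemote("192.168.39.145:22", "docker", "/path/to/id_rsa",
		`sudo hostname example && echo example | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
```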
	I1212 22:15:39.892320   92149 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:15:39.892343   92149 buildroot.go:174] setting up certificates
	I1212 22:15:39.892354   92149 provision.go:83] configureAuth start
	I1212 22:15:39.892364   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetMachineName
	I1212 22:15:39.892691   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetIP
	I1212 22:15:39.895090   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.895501   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.895535   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.895675   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.897638   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.897938   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.897968   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.898086   92149 provision.go:138] copyHostCerts
	I1212 22:15:39.898119   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:15:39.898159   92149 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:15:39.898181   92149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:15:39.898273   92149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:15:39.898386   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:15:39.898414   92149 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:15:39.898425   92149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:15:39.898457   92149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:15:39.898509   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:15:39.898524   92149 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:15:39.898531   92149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:15:39.898554   92149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:15:39.898599   92149 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-220067 san=[192.168.39.145 192.168.39.145 localhost 127.0.0.1 minikube ingress-addon-legacy-220067]
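The server certificate generated in this step carries the machine IP and hostnames as SANs so the remote Docker/CRI endpoint can be verified by clients. A minimal, self-signed sketch of producing a certificate with IP and DNS SANs using Go's crypto/x509; the real provisioner signs with the CA key from the paths above, and every value below is illustrative.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"example-org"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs covering the machine IP, loopback, and hostnames.
		DNSNames:    []string{"localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.145"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity; template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```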
	I1212 22:15:39.954528   92149 provision.go:172] copyRemoteCerts
	I1212 22:15:39.954596   92149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:15:39.954624   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:39.957183   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.957493   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:39.957517   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:39.957668   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:39.957887   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:39.958037   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:39.958161   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:15:40.052395   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:15:40.052470   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:15:40.075784   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:15:40.075867   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:15:40.098594   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:15:40.098675   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 22:15:40.122612   92149 provision.go:86] duration metric: configureAuth took 230.24428ms
	I1212 22:15:40.122649   92149 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:15:40.122878   92149 config.go:182] Loaded profile config "ingress-addon-legacy-220067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 22:15:40.122981   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:40.125547   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.125989   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.126020   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.126192   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:40.126399   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.126543   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.126663   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:40.126818   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:40.127202   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:40.127229   92149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:15:40.463799   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:15:40.463832   92149 main.go:141] libmachine: Checking connection to Docker...
	I1212 22:15:40.463846   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetURL
	I1212 22:15:40.465198   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Using libvirt version 6000000
	I1212 22:15:40.467326   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.467606   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.467642   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.467760   92149 main.go:141] libmachine: Docker is up and running!
	I1212 22:15:40.467775   92149 main.go:141] libmachine: Reticulating splines...
	I1212 22:15:40.467784   92149 client.go:171] LocalClient.Create took 26.481274254s
	I1212 22:15:40.467815   92149 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-220067" took 26.4813516s
	I1212 22:15:40.467829   92149 start.go:300] post-start starting for "ingress-addon-legacy-220067" (driver="kvm2")
	I1212 22:15:40.467843   92149 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:15:40.467871   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:40.468130   92149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:15:40.468165   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:40.470353   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.470653   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.470684   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.470780   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:40.470954   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.471125   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:40.471289   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:15:40.569027   92149 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:15:40.573313   92149 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:15:40.573341   92149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:15:40.573403   92149 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:15:40.573475   92149 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:15:40.573486   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:15:40.573569   92149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:15:40.582780   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:15:40.607520   92149 start.go:303] post-start completed in 139.670771ms
	I1212 22:15:40.607577   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetConfigRaw
	I1212 22:15:40.633912   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetIP
	I1212 22:15:40.637019   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.637407   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.637440   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.637717   92149 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/config.json ...
	I1212 22:15:40.695852   92149 start.go:128] duration metric: createHost completed in 26.728337885s
	I1212 22:15:40.695905   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:40.698663   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.699021   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.699073   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.699223   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:40.699440   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.699640   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.699803   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:40.699981   92149 main.go:141] libmachine: Using SSH client type: native
	I1212 22:15:40.700326   92149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I1212 22:15:40.700343   92149 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:15:40.835921   92149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702419340.820181881
	
	I1212 22:15:40.835947   92149 fix.go:206] guest clock: 1702419340.820181881
	I1212 22:15:40.835955   92149 fix.go:219] Guest: 2023-12-12 22:15:40.820181881 +0000 UTC Remote: 2023-12-12 22:15:40.695882261 +0000 UTC m=+31.161875138 (delta=124.29962ms)
	I1212 22:15:40.835979   92149 fix.go:190] guest clock delta is within tolerance: 124.29962ms
	I1212 22:15:40.835983   92149 start.go:83] releasing machines lock for "ingress-addon-legacy-220067", held for 26.868603547s
	I1212 22:15:40.836006   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:40.836330   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetIP
	I1212 22:15:40.838959   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.839304   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.839341   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.839451   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:40.839999   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:40.840146   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:15:40.840230   92149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:15:40.840284   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:40.840347   92149 ssh_runner.go:195] Run: cat /version.json
	I1212 22:15:40.840372   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:15:40.842900   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.843119   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.843335   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.843358   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.843533   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:40.843543   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:40.843582   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:40.843704   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.843771   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:15:40.843882   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:40.843957   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:15:40.844019   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:15:40.844056   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:15:40.844156   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:15:40.946136   92149 ssh_runner.go:195] Run: systemctl --version
	I1212 22:15:40.970718   92149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:15:41.675502   92149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 22:15:41.681763   92149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:15:41.681831   92149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:15:41.697031   92149 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:15:41.697052   92149 start.go:475] detecting cgroup driver to use...
	I1212 22:15:41.697129   92149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:15:41.711085   92149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:15:41.724040   92149 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:15:41.724103   92149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:15:41.736172   92149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:15:41.748259   92149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:15:41.848305   92149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:15:41.971024   92149 docker.go:219] disabling docker service ...
	I1212 22:15:41.971108   92149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:15:41.985411   92149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:15:41.997008   92149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:15:42.108238   92149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:15:42.221967   92149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:15:42.234933   92149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:15:42.252449   92149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 22:15:42.252514   92149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:15:42.261636   92149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:15:42.261705   92149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:15:42.270807   92149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:15:42.279679   92149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:15:42.288838   92149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:15:42.298536   92149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:15:42.306401   92149 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:15:42.306462   92149 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:15:42.318480   92149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:15:42.328647   92149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:15:42.441053   92149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:15:42.611037   92149 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:15:42.611119   92149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:15:42.619687   92149 start.go:543] Will wait 60s for crictl version
	I1212 22:15:42.619759   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:42.624003   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:15:42.671210   92149 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:15:42.671314   92149 ssh_runner.go:195] Run: crio --version
	I1212 22:15:42.719021   92149 ssh_runner.go:195] Run: crio --version
	I1212 22:15:42.772664   92149 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1212 22:15:42.774187   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetIP
	I1212 22:15:42.777755   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:42.778179   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:15:42.778210   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:15:42.778400   92149 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:15:42.782902   92149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:15:42.795079   92149 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:15:42.795152   92149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:15:42.833886   92149 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 22:15:42.833967   92149 ssh_runner.go:195] Run: which lz4
	I1212 22:15:42.837922   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 22:15:42.838035   92149 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 22:15:42.842303   92149 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:15:42.842342   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1212 22:15:44.824033   92149 crio.go:444] Took 1.986033 seconds to copy over tarball
	I1212 22:15:44.824114   92149 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:15:48.018169   92149 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.19402384s)
	I1212 22:15:48.018197   92149 crio.go:451] Took 3.194138 seconds to extract the tarball
	I1212 22:15:48.018206   92149 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 22:15:48.064313   92149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:15:48.123948   92149 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
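
	Both image checks above use the same inventory command; the preload flow in between amounts to copying the tarball from the host cache and unpacking it over /var. Condensed, the guest-side steps logged above are roughly the following (commands as they appear in the log). Note that even after extraction the expected apiserver image was still not found, which is why the run falls back to the per-image cache below:

	    sudo crictl images --output json     # inventory check: expected images missing
	    # (preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 is scp'd to /preloaded.tar.lz4)
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	    rm /preloaded.tar.lz4
	    sudo crictl images --output json     # re-check after extraction
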
	I1212 22:15:48.123981   92149 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 22:15:48.124045   92149 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:15:48.124074   92149 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:15:48.124104   92149 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 22:15:48.124131   92149 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:15:48.124151   92149 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:15:48.124225   92149 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 22:15:48.124071   92149 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:15:48.124114   92149 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:15:48.125552   92149 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:15:48.125580   92149 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:15:48.125603   92149 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:15:48.125621   92149 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 22:15:48.125605   92149 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 22:15:48.125553   92149 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:15:48.125555   92149 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:15:48.125552   92149 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:15:48.311929   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 22:15:48.321087   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:15:48.339890   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:15:48.346331   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1212 22:15:48.372489   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:15:48.377094   92149 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1212 22:15:48.377162   92149 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:15:48.377210   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.377661   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:15:48.395016   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 22:15:48.419229   92149 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1212 22:15:48.419313   92149 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:15:48.419365   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.445427   92149 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:15:48.512517   92149 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1212 22:15:48.512572   92149 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:15:48.512622   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.512640   92149 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1212 22:15:48.512675   92149 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 22:15:48.512716   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.512717   92149 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1212 22:15:48.512739   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 22:15:48.512743   92149 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:15:48.512780   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.521384   92149 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1212 22:15:48.521446   92149 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:15:48.521495   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.533502   92149 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 22:15:48.533551   92149 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 22:15:48.533576   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:15:48.533588   92149 ssh_runner.go:195] Run: which crictl
	I1212 22:15:48.614748   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:15:48.614821   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:15:48.614847   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 22:15:48.614960   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 22:15:48.614962   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:15:48.615002   92149 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 22:15:48.615070   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 22:15:48.731100   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1212 22:15:48.731179   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 22:15:48.737128   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 22:15:48.737213   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 22:15:48.737246   92149 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 22:15:48.737301   92149 cache_images.go:92] LoadImages completed in 613.298698ms
	W1212 22:15:48.737375   92149 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
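
	The warning above is not fatal: it only means the host-side per-image cache under .minikube/cache/images was never populated for these tags, so the images are instead pulled during kubeadm preflight (visible below as "[preflight] Pulling images required..."). If one wanted to pre-populate that cache ahead of a run, one way is the cache subcommand (illustrative; the tags come from the LoadImages list above):

	    minikube cache add registry.k8s.io/etcd:3.4.3-0
	    minikube cache add registry.k8s.io/coredns:1.6.7
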
	I1212 22:15:48.737450   92149 ssh_runner.go:195] Run: crio config
	I1212 22:15:48.802931   92149 cni.go:84] Creating CNI manager for ""
	I1212 22:15:48.802952   92149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:15:48.802971   92149 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:15:48.803004   92149 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-220067 NodeName:ingress-addon-legacy-220067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 22:15:48.803135   92149 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-220067"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:15:48.803223   92149 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-220067 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-220067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
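
	The drop-in above overrides the kubelet ExecStart so it talks to CRI-O over /var/run/crio/crio.sock; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf together with a kubelet.service unit (see the two scp lines that follow). A hedged sketch of verifying and activating such a drop-in by hand:

	    systemctl cat kubelet                  # confirm the 10-kubeadm.conf drop-in is picked up
	    sudo systemctl daemon-reload
	    sudo systemctl enable --now kubelet    # kubeadm later warns if the service was never enabled
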
	I1212 22:15:48.803303   92149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 22:15:48.813569   92149 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:15:48.813633   92149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:15:48.823405   92149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1212 22:15:48.839722   92149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 22:15:48.855951   92149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
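
	The 2129-byte payload staged above as kubeadm.yaml.new is the kubeadm config printed earlier; it is promoted to /var/tmp/minikube/kubeadm.yaml further down before kubeadm init runs. A config like this can also be exercised without mutating the node via kubeadm's dry-run mode (a sketch, assuming the v1.18.20 binaries already unpacked on the node as in this run):

	    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
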
	I1212 22:15:48.872290   92149 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I1212 22:15:48.876247   92149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:15:48.888234   92149 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067 for IP: 192.168.39.145
	I1212 22:15:48.888271   92149 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:48.888435   92149 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:15:48.888491   92149 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:15:48.888550   92149 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key
	I1212 22:15:48.888569   92149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt with IP's: []
	I1212 22:15:48.973455   92149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt ...
	I1212 22:15:48.973491   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: {Name:mka325cc51a76c6137c8234edcbd2c7dfd668c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:48.973689   92149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key ...
	I1212 22:15:48.973709   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key: {Name:mk13321a0dd4d2ddd4862b135fea3cbbdeebf587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:48.973822   92149 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key.c05e0d2e
	I1212 22:15:48.973852   92149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt.c05e0d2e with IP's: [192.168.39.145 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:15:49.083913   92149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt.c05e0d2e ...
	I1212 22:15:49.083948   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt.c05e0d2e: {Name:mk0dba662d6f7330a94657ab62908036d8f71603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:49.084151   92149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key.c05e0d2e ...
	I1212 22:15:49.084171   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key.c05e0d2e: {Name:mk4dfa866c267f7a5cb1fa842fbe005c6bff0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:49.084271   92149 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt.c05e0d2e -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt
	I1212 22:15:49.084391   92149 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key.c05e0d2e -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key
	I1212 22:15:49.084468   92149 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.key
	I1212 22:15:49.084485   92149 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.crt with IP's: []
	I1212 22:15:49.407611   92149 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.crt ...
	I1212 22:15:49.407645   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.crt: {Name:mk94bea9b672f5eedbf6b31e1eab1ab62672f464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:49.407826   92149 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.key ...
	I1212 22:15:49.407843   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.key: {Name:mka748f272d57271c3c6e39dde93d701d7741cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:15:49.407931   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 22:15:49.407952   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 22:15:49.407970   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 22:15:49.407985   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 22:15:49.408004   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:15:49.408024   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:15:49.408035   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:15:49.408046   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:15:49.408101   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:15:49.408146   92149 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:15:49.408158   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:15:49.408181   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:15:49.408212   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:15:49.408238   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:15:49.408285   92149 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:15:49.408319   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:15:49.408334   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:15:49.408349   92149 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:15:49.409033   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:15:49.434808   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 22:15:49.457989   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:15:49.481380   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:15:49.506148   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:15:49.530207   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:15:49.554436   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:15:49.577614   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:15:49.600609   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:15:49.701057   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:15:49.724364   92149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:15:49.747999   92149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:15:49.764158   92149 ssh_runner.go:195] Run: openssl version
	I1212 22:15:49.769952   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:15:49.781962   92149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:15:49.787101   92149 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:15:49.787180   92149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:15:49.792766   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:15:49.803072   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:15:49.813802   92149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:15:49.818334   92149 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:15:49.818380   92149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:15:49.824024   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 22:15:49.834080   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:15:49.844708   92149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:15:49.849270   92149 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:15:49.849318   92149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:15:49.854883   92149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
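
	The ls/openssl/ln sequences above follow the standard OpenSSL trust-store convention: each CA copied into /usr/share/ca-certificates is linked into /etc/ssl/certs both under its own name and under its subject hash with a .0 suffix, which is how names like b5213941.0 and 3ec20f2e.0 arise. By hand that amounts to roughly (hash output differs per certificate):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
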
	I1212 22:15:49.865054   92149 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:15:49.869164   92149 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:15:49.869216   92149 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-220067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-220067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:15:49.869330   92149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:15:49.869382   92149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:15:49.905161   92149 cri.go:89] found id: ""
	I1212 22:15:49.905229   92149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:15:49.914955   92149 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:15:49.923847   92149 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:15:49.932789   92149 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:15:49.932837   92149 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1212 22:15:49.988518   92149 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 22:15:49.988620   92149 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:15:50.115693   92149 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:15:50.115859   92149 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:15:50.116062   92149 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:15:50.334712   92149 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:15:50.334815   92149 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:15:50.334854   92149 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:15:50.461993   92149 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:15:50.464989   92149 out.go:204]   - Generating certificates and keys ...
	I1212 22:15:50.465084   92149 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:15:50.465163   92149 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:15:50.553761   92149 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:15:50.677273   92149 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:15:50.749761   92149 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:15:50.896487   92149 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:15:51.076742   92149 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:15:51.076936   92149 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-220067 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I1212 22:15:51.476591   92149 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:15:51.476932   92149 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-220067 localhost] and IPs [192.168.39.145 127.0.0.1 ::1]
	I1212 22:15:51.646441   92149 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:15:51.768323   92149 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:15:51.846522   92149 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:15:51.846781   92149 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:15:51.902307   92149 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:15:52.140921   92149 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:15:52.244370   92149 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:15:52.539799   92149 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:15:52.540648   92149 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:15:52.542743   92149 out.go:204]   - Booting up control plane ...
	I1212 22:15:52.542857   92149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:15:52.546491   92149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:15:52.548022   92149 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:15:52.550749   92149 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:15:52.552730   92149 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:16:01.555839   92149 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003563 seconds
	I1212 22:16:01.555956   92149 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:16:01.571635   92149 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:16:02.094547   92149 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:16:02.094745   92149 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-220067 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 22:16:02.603833   92149 kubeadm.go:322] [bootstrap-token] Using token: 9dldb4.i4rkkgtyiq7a5e11
	I1212 22:16:02.605457   92149 out.go:204]   - Configuring RBAC rules ...
	I1212 22:16:02.605573   92149 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:16:02.613328   92149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:16:02.627998   92149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:16:02.637530   92149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:16:02.641303   92149 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:16:02.645583   92149 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:16:02.657818   92149 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:16:02.947985   92149 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:16:03.023927   92149 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:16:03.023954   92149 kubeadm.go:322] 
	I1212 22:16:03.024032   92149 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:16:03.024045   92149 kubeadm.go:322] 
	I1212 22:16:03.024146   92149 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:16:03.024160   92149 kubeadm.go:322] 
	I1212 22:16:03.024193   92149 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:16:03.024304   92149 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:16:03.024390   92149 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:16:03.024403   92149 kubeadm.go:322] 
	I1212 22:16:03.024491   92149 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:16:03.024624   92149 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:16:03.024735   92149 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:16:03.024752   92149 kubeadm.go:322] 
	I1212 22:16:03.024875   92149 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:16:03.024982   92149 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:16:03.025005   92149 kubeadm.go:322] 
	I1212 22:16:03.025113   92149 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9dldb4.i4rkkgtyiq7a5e11 \
	I1212 22:16:03.025243   92149 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 22:16:03.025284   92149 kubeadm.go:322]     --control-plane 
	I1212 22:16:03.025293   92149 kubeadm.go:322] 
	I1212 22:16:03.025392   92149 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:16:03.025403   92149 kubeadm.go:322] 
	I1212 22:16:03.025523   92149 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9dldb4.i4rkkgtyiq7a5e11 \
	I1212 22:16:03.025633   92149 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:16:03.025784   92149 kubeadm.go:322] W1212 22:15:49.981855     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 22:16:03.025924   92149 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:16:03.026048   92149 kubeadm.go:322] W1212 22:15:52.541079     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 22:16:03.026196   92149 kubeadm.go:322] W1212 22:15:52.542697     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 22:16:03.026220   92149 cni.go:84] Creating CNI manager for ""
	I1212 22:16:03.026231   92149 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:16:03.028362   92149 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 22:16:03.029821   92149 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 22:16:03.048347   92149 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
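
	The 457-byte file written above is minikube's bridge CNI configuration; its verbatim contents are not shown in the log. For orientation, a typical bridge-plus-portmap conflist scoped to the 10.244.0.0/16 pod CIDR used by this cluster looks roughly like the following (field values are illustrative, not the file's exact contents):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
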
	I1212 22:16:03.069386   92149 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:16:03.069508   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:03.069543   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=ingress-addon-legacy-220067 minikube.k8s.io/updated_at=2023_12_12T22_16_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:03.101519   92149 ops.go:34] apiserver oom_adj: -16
	I1212 22:16:03.247142   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:03.437966   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:04.083672   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:04.584570   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:05.083792   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:05.584430   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:06.084645   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:06.584435   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:07.084334   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:07.583954   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:08.084686   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:08.584447   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:09.083677   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:09.584080   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:10.084566   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:10.584653   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:11.084060   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:11.583702   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:12.084013   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:12.583883   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:13.084084   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:13.583813   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:14.084140   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:14.583985   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:15.084306   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:15.584473   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:16.084382   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:16.583985   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:17.084512   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:17.583871   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:18.084540   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:18.584491   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:19.083739   92149 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:16:19.224248   92149 kubeadm.go:1088] duration metric: took 16.154826838s to wait for elevateKubeSystemPrivileges.
	I1212 22:16:19.224297   92149 kubeadm.go:406] StartCluster complete in 29.355085902s
	I1212 22:16:19.224322   92149 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:16:19.224425   92149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:16:19.225362   92149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:16:19.225629   92149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:16:19.225691   92149 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 22:16:19.225778   92149 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-220067"
	I1212 22:16:19.225799   92149 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-220067"
	I1212 22:16:19.225803   92149 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-220067"
	I1212 22:16:19.225824   92149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-220067"
	I1212 22:16:19.225851   92149 config.go:182] Loaded profile config "ingress-addon-legacy-220067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 22:16:19.225878   92149 host.go:66] Checking if "ingress-addon-legacy-220067" exists ...
	I1212 22:16:19.226329   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:16:19.226344   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:16:19.226361   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:16:19.226368   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:16:19.226426   92149 kapi.go:59] client config for ingress-addon-legacy-220067: &rest.Config{Host:"https://192.168.39.145:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:16:19.227266   92149 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 22:16:19.242675   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 22:16:19.242837   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1212 22:16:19.243121   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:16:19.243277   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:16:19.243665   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:16:19.243682   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:16:19.243821   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:16:19.243847   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:16:19.244036   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:16:19.244182   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:16:19.244365   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetState
	I1212 22:16:19.244622   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:16:19.244659   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:16:19.247048   92149 kapi.go:59] client config for ingress-addon-legacy-220067: &rest.Config{Host:"https://192.168.39.145:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:16:19.247438   92149 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-220067"
	I1212 22:16:19.247487   92149 host.go:66] Checking if "ingress-addon-legacy-220067" exists ...
	I1212 22:16:19.247928   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:16:19.247977   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:16:19.259261   92149 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-220067" context rescaled to 1 replicas
	I1212 22:16:19.259313   92149 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:16:19.261059   92149 out.go:177] * Verifying Kubernetes components...
	I1212 22:16:19.260965   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I1212 22:16:19.262621   92149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:16:19.261539   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:16:19.262704   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1212 22:16:19.263271   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:16:19.263296   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:16:19.263337   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:16:19.263662   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:16:19.263830   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:16:19.263850   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:16:19.263851   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetState
	I1212 22:16:19.264207   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:16:19.264773   92149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:16:19.264808   92149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:16:19.265672   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:16:19.267617   92149 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:16:19.269169   92149 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:16:19.269193   92149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:16:19.269215   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:16:19.271966   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:16:19.272417   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:16:19.272448   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:16:19.272630   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:16:19.272812   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:16:19.272981   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:16:19.273100   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:16:19.281426   92149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I1212 22:16:19.281847   92149 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:16:19.282349   92149 main.go:141] libmachine: Using API Version  1
	I1212 22:16:19.282370   92149 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:16:19.282668   92149 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:16:19.282872   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetState
	I1212 22:16:19.284388   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .DriverName
	I1212 22:16:19.284636   92149 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:16:19.284652   92149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:16:19.284667   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHHostname
	I1212 22:16:19.287296   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:16:19.287733   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:28:78", ip: ""} in network mk-ingress-addon-legacy-220067: {Iface:virbr1 ExpiryTime:2023-12-12 23:15:30 +0000 UTC Type:0 Mac:52:54:00:ba:28:78 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ingress-addon-legacy-220067 Clientid:01:52:54:00:ba:28:78}
	I1212 22:16:19.287792   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | domain ingress-addon-legacy-220067 has defined IP address 192.168.39.145 and MAC address 52:54:00:ba:28:78 in network mk-ingress-addon-legacy-220067
	I1212 22:16:19.288010   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHPort
	I1212 22:16:19.288167   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHKeyPath
	I1212 22:16:19.288371   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .GetSSHUsername
	I1212 22:16:19.288457   92149 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/ingress-addon-legacy-220067/id_rsa Username:docker}
	I1212 22:16:19.390983   92149 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:16:19.391367   92149 kapi.go:59] client config for ingress-addon-legacy-220067: &rest.Config{Host:"https://192.168.39.145:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:16:19.391687   92149 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-220067" to be "Ready" ...
	I1212 22:16:19.440937   92149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:16:19.452823   92149 node_ready.go:49] node "ingress-addon-legacy-220067" has status "Ready":"True"
	I1212 22:16:19.452858   92149 node_ready.go:38] duration metric: took 61.148553ms waiting for node "ingress-addon-legacy-220067" to be "Ready" ...
	I1212 22:16:19.452871   92149 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:16:19.461393   92149 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-g7fpf" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:19.491030   92149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:16:20.048474   92149 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 22:16:20.178192   92149 main.go:141] libmachine: Making call to close driver server
	I1212 22:16:20.178230   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Close
	I1212 22:16:20.178267   92149 main.go:141] libmachine: Making call to close driver server
	I1212 22:16:20.178289   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Close
	I1212 22:16:20.178643   92149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:16:20.178694   92149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:16:20.178713   92149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:16:20.178714   92149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:16:20.178726   92149 main.go:141] libmachine: Making call to close driver server
	I1212 22:16:20.178733   92149 main.go:141] libmachine: Making call to close driver server
	I1212 22:16:20.178740   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Close
	I1212 22:16:20.178751   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Close
	I1212 22:16:20.178673   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Closing plugin on server side
	I1212 22:16:20.178660   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) DBG | Closing plugin on server side
	I1212 22:16:20.178978   92149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:16:20.178991   92149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:16:20.179002   92149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:16:20.179017   92149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:16:20.195213   92149 main.go:141] libmachine: Making call to close driver server
	I1212 22:16:20.195266   92149 main.go:141] libmachine: (ingress-addon-legacy-220067) Calling .Close
	I1212 22:16:20.195563   92149 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:16:20.195579   92149 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:16:20.197446   92149 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 22:16:20.198678   92149 addons.go:502] enable addons completed in 972.991399ms: enabled=[storage-provisioner default-storageclass]
	I1212 22:16:21.537609   92149 pod_ready.go:102] pod "coredns-66bff467f8-g7fpf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:16:19 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:16:24.032611   92149 pod_ready.go:102] pod "coredns-66bff467f8-g7fpf" in "kube-system" namespace has status "Ready":"False"
	I1212 22:16:25.531107   92149 pod_ready.go:92] pod "coredns-66bff467f8-g7fpf" in "kube-system" namespace has status "Ready":"True"
	I1212 22:16:25.531134   92149 pod_ready.go:81] duration metric: took 6.069699844s waiting for pod "coredns-66bff467f8-g7fpf" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.531143   92149 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.536678   92149 pod_ready.go:92] pod "etcd-ingress-addon-legacy-220067" in "kube-system" namespace has status "Ready":"True"
	I1212 22:16:25.536701   92149 pod_ready.go:81] duration metric: took 5.551155ms waiting for pod "etcd-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.536710   92149 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.541702   92149 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-220067" in "kube-system" namespace has status "Ready":"True"
	I1212 22:16:25.541724   92149 pod_ready.go:81] duration metric: took 5.008096ms waiting for pod "kube-apiserver-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.541732   92149 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.546366   92149 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-220067" in "kube-system" namespace has status "Ready":"True"
	I1212 22:16:25.546389   92149 pod_ready.go:81] duration metric: took 4.647588ms waiting for pod "kube-controller-manager-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.546397   92149 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.551319   92149 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-220067" in "kube-system" namespace has status "Ready":"True"
	I1212 22:16:25.551351   92149 pod_ready.go:81] duration metric: took 4.942945ms waiting for pod "kube-scheduler-ingress-addon-legacy-220067" in "kube-system" namespace to be "Ready" ...
	I1212 22:16:25.551362   92149 pod_ready.go:38] duration metric: took 6.098478579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:16:25.551419   92149 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:16:25.551480   92149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:16:25.563695   92149 api_server.go:72] duration metric: took 6.304339937s to wait for apiserver process to appear ...
	I1212 22:16:25.563722   92149 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:16:25.563741   92149 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I1212 22:16:25.569118   92149 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I1212 22:16:25.570361   92149 api_server.go:141] control plane version: v1.18.20
	I1212 22:16:25.570384   92149 api_server.go:131] duration metric: took 6.656339ms to wait for apiserver health ...
	I1212 22:16:25.570393   92149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:16:25.724753   92149 request.go:629] Waited for 154.267767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I1212 22:16:25.732014   92149 system_pods.go:59] 7 kube-system pods found
	I1212 22:16:25.732045   92149 system_pods.go:61] "coredns-66bff467f8-g7fpf" [7c4678b0-40f1-421e-ac87-e9e47bd79710] Running
	I1212 22:16:25.732050   92149 system_pods.go:61] "etcd-ingress-addon-legacy-220067" [7de07e68-2f9f-4cad-a605-0ea8801d5569] Running
	I1212 22:16:25.732055   92149 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-220067" [3d2bc183-0fe2-425d-9ad0-8a890f570dc4] Running
	I1212 22:16:25.732059   92149 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-220067" [f92b9eae-9591-4ec8-9de4-3dc5dd2c112d] Running
	I1212 22:16:25.732063   92149 system_pods.go:61] "kube-proxy-th7h9" [75d2e0a6-469d-4810-a7e9-48e865f4621e] Running
	I1212 22:16:25.732067   92149 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-220067" [aacee307-db58-4583-ae23-1f724bd2075a] Running
	I1212 22:16:25.732074   92149 system_pods.go:61] "storage-provisioner" [ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4] Running
	I1212 22:16:25.732082   92149 system_pods.go:74] duration metric: took 161.68346ms to wait for pod list to return data ...
	I1212 22:16:25.732091   92149 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:16:25.925540   92149 request.go:629] Waited for 193.357286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:16:25.929095   92149 default_sa.go:45] found service account: "default"
	I1212 22:16:25.929124   92149 default_sa.go:55] duration metric: took 197.026467ms for default service account to be created ...
	I1212 22:16:25.929132   92149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:16:26.125572   92149 request.go:629] Waited for 196.362864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/namespaces/kube-system/pods
	I1212 22:16:26.131562   92149 system_pods.go:86] 7 kube-system pods found
	I1212 22:16:26.131592   92149 system_pods.go:89] "coredns-66bff467f8-g7fpf" [7c4678b0-40f1-421e-ac87-e9e47bd79710] Running
	I1212 22:16:26.131597   92149 system_pods.go:89] "etcd-ingress-addon-legacy-220067" [7de07e68-2f9f-4cad-a605-0ea8801d5569] Running
	I1212 22:16:26.131601   92149 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-220067" [3d2bc183-0fe2-425d-9ad0-8a890f570dc4] Running
	I1212 22:16:26.131606   92149 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-220067" [f92b9eae-9591-4ec8-9de4-3dc5dd2c112d] Running
	I1212 22:16:26.131616   92149 system_pods.go:89] "kube-proxy-th7h9" [75d2e0a6-469d-4810-a7e9-48e865f4621e] Running
	I1212 22:16:26.131627   92149 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-220067" [aacee307-db58-4583-ae23-1f724bd2075a] Running
	I1212 22:16:26.131630   92149 system_pods.go:89] "storage-provisioner" [ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4] Running
	I1212 22:16:26.131639   92149 system_pods.go:126] duration metric: took 202.500055ms to wait for k8s-apps to be running ...
	I1212 22:16:26.131649   92149 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:16:26.131700   92149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:16:26.144476   92149 system_svc.go:56] duration metric: took 12.816703ms WaitForService to wait for kubelet.
	I1212 22:16:26.144502   92149 kubeadm.go:581] duration metric: took 6.885154923s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:16:26.144519   92149 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:16:26.324881   92149 request.go:629] Waited for 180.262716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.145:8443/api/v1/nodes
	I1212 22:16:26.329234   92149 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:16:26.329262   92149 node_conditions.go:123] node cpu capacity is 2
	I1212 22:16:26.329273   92149 node_conditions.go:105] duration metric: took 184.749367ms to run NodePressure ...
	I1212 22:16:26.329286   92149 start.go:228] waiting for startup goroutines ...
	I1212 22:16:26.329293   92149 start.go:233] waiting for cluster config update ...
	I1212 22:16:26.329304   92149 start.go:242] writing updated cluster config ...
	I1212 22:16:26.329581   92149 ssh_runner.go:195] Run: rm -f paused
	I1212 22:16:26.381126   92149 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 22:16:26.383284   92149 out.go:177] 
	W1212 22:16:26.385029   92149 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 22:16:26.386572   92149 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 22:16:26.388095   92149 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-220067" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 22:15:26 UTC, ends at Tue 2023-12-12 22:19:28 UTC. --
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.194758439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419568194740089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=ebf3e0dc-8310-41a7-9ba5-07de78e283bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.195397566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b346c7e4-df54-4108-beca-6858fd51a8ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.195475921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b346c7e4-df54-4108-beca-6858fd51a8ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.195765256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7660419f1bea649dc7b44a5adf87be7343653d2097c2f7de212b1c7aa6aff46f,PodSandboxId:26266ed973bd3a5399774c5bf992f2e4b1ffd6e9d01626574c5d4bdd03e0673f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702419560026743255,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-zrck8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42e57c61-6ecb-4aa9-abc8-6781bed985b5,},Annotations:map[string]string{io.kubernetes.container.hash: acfc4782,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c1ddba802999ae331fa430dc935b3a8876a2bbf1343cfb87cb5a554757a939,PodSandboxId:8e48f9c80200944f4071d3e7c808e90847502bfccab1b52dbb8fc8e3f66fc4b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702419417390578949,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e4fd974-1e2b-49ab-9b6c-d12e26372b63,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e727de76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c432c7b9b0952d269f6a517ba8a06d1e81e6a567c36d8fbb070b167cd6f1d1,PodSandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419412142296296,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4750ea6c0a8f099ce470cdb5d61010417fe5f10ff9adae5be95e195699ebadae,PodSandboxId:5d581138c72effe937699d561487822c862e950198785c54ff0e6cc4e1381d3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702419398512069115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t7v4x,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 988eec5c-0905-424a-bf26-59590bdc5a36,},Annotations:map[string]string{io.kubernetes.container.hash: 6c9ad90e,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0153ec8d8c066a60c1bbca17166d79a7a3807f80f8bec3a03cb1a8767b7664ad,PodSandboxId:4f4509614d9f55715931ab0ec93d0062cbfc81663c2854a92877c1639b29774a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389524595483,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twkt7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87,},Annotations:map[string]string{io.kubernetes.container.hash: 7a0f2456,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1af5a6aef503889b5da2c2c3df17b008a66e200205c187df5c0753c0c96b84,PodSandboxId:7131b67fdaf87529f36661962b3923309476f0d5a954fba369653139e38539f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389378797200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4xsg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3e341a85-6b16-4b31-83a8-23c72b9b43c9,},Annotations:map[string]string{io.kubernetes.container.hash: 39480fb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00c8ad44bde54b9ebc44166efdb5a75edfd41f81e36b23c7cfbd30ffc285da4,PodSandboxId:e37c541af3d4f809c460905e6e6196eac2dcd6f6ae29ad755040c10c02755d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702419382428268972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-g7fpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c4678b0-40f1-421e-ac87-e9e47bd79710,},Annotations:map[string]string{io.kubernetes.container.hash: a493d575,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e472356d999c5b1652115529097
3808769c7accb4a315a7815222dc3645169f,PodSandboxId:f0fd97a895a537c03a38bb4db209d83bfae7c5887097933b247f3631d81ac49e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702419381919577616,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th7h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d2e0a6-469d-4810-a7e9-48e865f4621e,},Annotations:map[string]string{io.kubernetes.container.hash: f2d4638d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5,Pod
SandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702419381552706767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5433b6543e5295f90a052e210fa4e968f477b002c6404e9bc1a1d7c148467,PodSa
ndboxId:cbeb0927dbcab441d253687ce5511208d4924eb4560b35f3252a52c9f3412957,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702419356284416915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba00d7476bfb5476c3ead87f78dd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2939cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f76324b7410131cec8907c259cbda54cab218ae7f870da6a11fff408031cfb76,PodSandboxId:498c6d9cdc29e93fd1477751b6fe015a3e1e45
93453c81e6f299e47ddc3b28bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702419354798338799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c4458edef971112e5bd646bdebed0d765d118c780b517c8616d2338c5ce060,PodSandboxId:18d314e482126305f28d4ae24f50caa9aa169489ef7f
d2a6d489e63906049acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702419354816843824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13baa2892fc1545a5ef788ef83536aea99c28ad90ce500722a5fbf0a49aeac5e,PodSandboxId:2b788a8753b6a3
35bb9179e0a55d4e48d54cd8924a0667d14b5a927224a77cf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702419354530312151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71dbb680de839b3d2b5b4bc8c3796902,},Annotations:map[string]string{io.kubernetes.container.hash: d4901da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b346c7e4-df54-4108-beca-6858fd51a8ae name=/runtime.v1.RuntimeServi
ce/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.237740312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6e8b31ee-291a-4b48-a518-ee2373442aae name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.237800612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6e8b31ee-291a-4b48-a518-ee2373442aae name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.239304962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=45a40bfe-ffa1-49bc-a8a0-5aacf40a2ac8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.239781824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419568239768251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=45a40bfe-ffa1-49bc-a8a0-5aacf40a2ac8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.240354141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=740de0a6-aeae-4bd8-8f58-62a20e8b5495 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.240434439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=740de0a6-aeae-4bd8-8f58-62a20e8b5495 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.240716482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7660419f1bea649dc7b44a5adf87be7343653d2097c2f7de212b1c7aa6aff46f,PodSandboxId:26266ed973bd3a5399774c5bf992f2e4b1ffd6e9d01626574c5d4bdd03e0673f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702419560026743255,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-zrck8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42e57c61-6ecb-4aa9-abc8-6781bed985b5,},Annotations:map[string]string{io.kubernetes.container.hash: acfc4782,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c1ddba802999ae331fa430dc935b3a8876a2bbf1343cfb87cb5a554757a939,PodSandboxId:8e48f9c80200944f4071d3e7c808e90847502bfccab1b52dbb8fc8e3f66fc4b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702419417390578949,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e4fd974-1e2b-49ab-9b6c-d12e26372b63,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e727de76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c432c7b9b0952d269f6a517ba8a06d1e81e6a567c36d8fbb070b167cd6f1d1,PodSandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419412142296296,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4750ea6c0a8f099ce470cdb5d61010417fe5f10ff9adae5be95e195699ebadae,PodSandboxId:5d581138c72effe937699d561487822c862e950198785c54ff0e6cc4e1381d3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702419398512069115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t7v4x,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 988eec5c-0905-424a-bf26-59590bdc5a36,},Annotations:map[string]string{io.kubernetes.container.hash: 6c9ad90e,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0153ec8d8c066a60c1bbca17166d79a7a3807f80f8bec3a03cb1a8767b7664ad,PodSandboxId:4f4509614d9f55715931ab0ec93d0062cbfc81663c2854a92877c1639b29774a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389524595483,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twkt7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87,},Annotations:map[string]string{io.kubernetes.container.hash: 7a0f2456,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1af5a6aef503889b5da2c2c3df17b008a66e200205c187df5c0753c0c96b84,PodSandboxId:7131b67fdaf87529f36661962b3923309476f0d5a954fba369653139e38539f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389378797200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4xsg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3e341a85-6b16-4b31-83a8-23c72b9b43c9,},Annotations:map[string]string{io.kubernetes.container.hash: 39480fb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00c8ad44bde54b9ebc44166efdb5a75edfd41f81e36b23c7cfbd30ffc285da4,PodSandboxId:e37c541af3d4f809c460905e6e6196eac2dcd6f6ae29ad755040c10c02755d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702419382428268972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-g7fpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c4678b0-40f1-421e-ac87-e9e47bd79710,},Annotations:map[string]string{io.kubernetes.container.hash: a493d575,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e472356d999c5b1652115529097
3808769c7accb4a315a7815222dc3645169f,PodSandboxId:f0fd97a895a537c03a38bb4db209d83bfae7c5887097933b247f3631d81ac49e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702419381919577616,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th7h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d2e0a6-469d-4810-a7e9-48e865f4621e,},Annotations:map[string]string{io.kubernetes.container.hash: f2d4638d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5,Pod
SandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702419381552706767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5433b6543e5295f90a052e210fa4e968f477b002c6404e9bc1a1d7c148467,PodSa
ndboxId:cbeb0927dbcab441d253687ce5511208d4924eb4560b35f3252a52c9f3412957,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702419356284416915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba00d7476bfb5476c3ead87f78dd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2939cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f76324b7410131cec8907c259cbda54cab218ae7f870da6a11fff408031cfb76,PodSandboxId:498c6d9cdc29e93fd1477751b6fe015a3e1e45
93453c81e6f299e47ddc3b28bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702419354798338799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c4458edef971112e5bd646bdebed0d765d118c780b517c8616d2338c5ce060,PodSandboxId:18d314e482126305f28d4ae24f50caa9aa169489ef7f
d2a6d489e63906049acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702419354816843824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13baa2892fc1545a5ef788ef83536aea99c28ad90ce500722a5fbf0a49aeac5e,PodSandboxId:2b788a8753b6a3
35bb9179e0a55d4e48d54cd8924a0667d14b5a927224a77cf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702419354530312151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71dbb680de839b3d2b5b4bc8c3796902,},Annotations:map[string]string{io.kubernetes.container.hash: d4901da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=740de0a6-aeae-4bd8-8f58-62a20e8b5495 name=/runtime.v1.RuntimeServi
ce/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.280517837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=03f60d3e-3f6b-4110-900e-8f4b589b14f7 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.280611380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=03f60d3e-3f6b-4110-900e-8f4b589b14f7 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.282282880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3658207f-ea2e-4f00-b0d0-00158e14808e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.282776103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419568282762377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=3658207f-ea2e-4f00-b0d0-00158e14808e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.283390891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=369ae230-273b-4a08-97a2-465250e65eca name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.283462413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=369ae230-273b-4a08-97a2-465250e65eca name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.283752391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7660419f1bea649dc7b44a5adf87be7343653d2097c2f7de212b1c7aa6aff46f,PodSandboxId:26266ed973bd3a5399774c5bf992f2e4b1ffd6e9d01626574c5d4bdd03e0673f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702419560026743255,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-zrck8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42e57c61-6ecb-4aa9-abc8-6781bed985b5,},Annotations:map[string]string{io.kubernetes.container.hash: acfc4782,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c1ddba802999ae331fa430dc935b3a8876a2bbf1343cfb87cb5a554757a939,PodSandboxId:8e48f9c80200944f4071d3e7c808e90847502bfccab1b52dbb8fc8e3f66fc4b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702419417390578949,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e4fd974-1e2b-49ab-9b6c-d12e26372b63,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e727de76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c432c7b9b0952d269f6a517ba8a06d1e81e6a567c36d8fbb070b167cd6f1d1,PodSandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419412142296296,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4750ea6c0a8f099ce470cdb5d61010417fe5f10ff9adae5be95e195699ebadae,PodSandboxId:5d581138c72effe937699d561487822c862e950198785c54ff0e6cc4e1381d3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702419398512069115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t7v4x,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 988eec5c-0905-424a-bf26-59590bdc5a36,},Annotations:map[string]string{io.kubernetes.container.hash: 6c9ad90e,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0153ec8d8c066a60c1bbca17166d79a7a3807f80f8bec3a03cb1a8767b7664ad,PodSandboxId:4f4509614d9f55715931ab0ec93d0062cbfc81663c2854a92877c1639b29774a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389524595483,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twkt7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87,},Annotations:map[string]string{io.kubernetes.container.hash: 7a0f2456,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1af5a6aef503889b5da2c2c3df17b008a66e200205c187df5c0753c0c96b84,PodSandboxId:7131b67fdaf87529f36661962b3923309476f0d5a954fba369653139e38539f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389378797200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4xsg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3e341a85-6b16-4b31-83a8-23c72b9b43c9,},Annotations:map[string]string{io.kubernetes.container.hash: 39480fb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00c8ad44bde54b9ebc44166efdb5a75edfd41f81e36b23c7cfbd30ffc285da4,PodSandboxId:e37c541af3d4f809c460905e6e6196eac2dcd6f6ae29ad755040c10c02755d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702419382428268972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-g7fpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c4678b0-40f1-421e-ac87-e9e47bd79710,},Annotations:map[string]string{io.kubernetes.container.hash: a493d575,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e472356d999c5b1652115529097
3808769c7accb4a315a7815222dc3645169f,PodSandboxId:f0fd97a895a537c03a38bb4db209d83bfae7c5887097933b247f3631d81ac49e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702419381919577616,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th7h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d2e0a6-469d-4810-a7e9-48e865f4621e,},Annotations:map[string]string{io.kubernetes.container.hash: f2d4638d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5,Pod
SandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702419381552706767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5433b6543e5295f90a052e210fa4e968f477b002c6404e9bc1a1d7c148467,PodSa
ndboxId:cbeb0927dbcab441d253687ce5511208d4924eb4560b35f3252a52c9f3412957,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702419356284416915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba00d7476bfb5476c3ead87f78dd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2939cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f76324b7410131cec8907c259cbda54cab218ae7f870da6a11fff408031cfb76,PodSandboxId:498c6d9cdc29e93fd1477751b6fe015a3e1e45
93453c81e6f299e47ddc3b28bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702419354798338799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c4458edef971112e5bd646bdebed0d765d118c780b517c8616d2338c5ce060,PodSandboxId:18d314e482126305f28d4ae24f50caa9aa169489ef7f
d2a6d489e63906049acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702419354816843824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13baa2892fc1545a5ef788ef83536aea99c28ad90ce500722a5fbf0a49aeac5e,PodSandboxId:2b788a8753b6a3
35bb9179e0a55d4e48d54cd8924a0667d14b5a927224a77cf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702419354530312151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71dbb680de839b3d2b5b4bc8c3796902,},Annotations:map[string]string{io.kubernetes.container.hash: d4901da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=369ae230-273b-4a08-97a2-465250e65eca name=/runtime.v1.RuntimeServi
ce/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.321247984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=15ee9592-561d-494f-91ef-74f35f030659 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.321336495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=15ee9592-561d-494f-91ef-74f35f030659 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.322499902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7caacc65-98c3-4227-a03b-551d27908d9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.322983175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419568322966966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7caacc65-98c3-4227-a03b-551d27908d9d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.323586277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a46925fd-cb85-47ea-bfde-0738e8a42a53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.323660279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a46925fd-cb85-47ea-bfde-0738e8a42a53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:19:28 ingress-addon-legacy-220067 crio[717]: time="2023-12-12 22:19:28.324590229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7660419f1bea649dc7b44a5adf87be7343653d2097c2f7de212b1c7aa6aff46f,PodSandboxId:26266ed973bd3a5399774c5bf992f2e4b1ffd6e9d01626574c5d4bdd03e0673f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1702419560026743255,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-zrck8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42e57c61-6ecb-4aa9-abc8-6781bed985b5,},Annotations:map[string]string{io.kubernetes.container.hash: acfc4782,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51c1ddba802999ae331fa430dc935b3a8876a2bbf1343cfb87cb5a554757a939,PodSandboxId:8e48f9c80200944f4071d3e7c808e90847502bfccab1b52dbb8fc8e3f66fc4b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1702419417390578949,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e4fd974-1e2b-49ab-9b6c-d12e26372b63,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e727de76,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c432c7b9b0952d269f6a517ba8a06d1e81e6a567c36d8fbb070b167cd6f1d1,PodSandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419412142296296,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4750ea6c0a8f099ce470cdb5d61010417fe5f10ff9adae5be95e195699ebadae,PodSandboxId:5d581138c72effe937699d561487822c862e950198785c54ff0e6cc4e1381d3e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1702419398512069115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t7v4x,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 988eec5c-0905-424a-bf26-59590bdc5a36,},Annotations:map[string]string{io.kubernetes.container.hash: 6c9ad90e,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0153ec8d8c066a60c1bbca17166d79a7a3807f80f8bec3a03cb1a8767b7664ad,PodSandboxId:4f4509614d9f55715931ab0ec93d0062cbfc81663c2854a92877c1639b29774a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389524595483,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twkt7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87,},Annotations:map[string]string{io.kubernetes.container.hash: 7a0f2456,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1af5a6aef503889b5da2c2c3df17b008a66e200205c187df5c0753c0c96b84,PodSandboxId:7131b67fdaf87529f36661962b3923309476f0d5a954fba369653139e38539f8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1702419389378797200,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4xsg7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3e341a85-6b16-4b31-83a8-23c72b9b43c9,},Annotations:map[string]string{io.kubernetes.container.hash: 39480fb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00c8ad44bde54b9ebc44166efdb5a75edfd41f81e36b23c7cfbd30ffc285da4,PodSandboxId:e37c541af3d4f809c460905e6e6196eac2dcd6f6ae29ad755040c10c02755d48,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1702419382428268972,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-g7fpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c4678b0-40f1-421e-ac87-e9e47bd79710,},Annotations:map[string]string{io.kubernetes.container.hash: a493d575,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e472356d999c5b1652115529097
3808769c7accb4a315a7815222dc3645169f,PodSandboxId:f0fd97a895a537c03a38bb4db209d83bfae7c5887097933b247f3631d81ac49e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1702419381919577616,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-th7h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75d2e0a6-469d-4810-a7e9-48e865f4621e,},Annotations:map[string]string{io.kubernetes.container.hash: f2d4638d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5,Pod
SandboxId:17d25d1c1a7eea1a4971ccd7462125ddf5f3d4ce8e378b9be531ef3e8f95e55b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702419381552706767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed2aef90-b84a-42f8-ad3e-c2ed9c5f5bd4,},Annotations:map[string]string{io.kubernetes.container.hash: fbe04732,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad5433b6543e5295f90a052e210fa4e968f477b002c6404e9bc1a1d7c148467,PodSa
ndboxId:cbeb0927dbcab441d253687ce5511208d4924eb4560b35f3252a52c9f3412957,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1702419356284416915,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba00d7476bfb5476c3ead87f78dd9e,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2939cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f76324b7410131cec8907c259cbda54cab218ae7f870da6a11fff408031cfb76,PodSandboxId:498c6d9cdc29e93fd1477751b6fe015a3e1e45
93453c81e6f299e47ddc3b28bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1702419354798338799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c4458edef971112e5bd646bdebed0d765d118c780b517c8616d2338c5ce060,PodSandboxId:18d314e482126305f28d4ae24f50caa9aa169489ef7f
d2a6d489e63906049acd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1702419354816843824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13baa2892fc1545a5ef788ef83536aea99c28ad90ce500722a5fbf0a49aeac5e,PodSandboxId:2b788a8753b6a3
35bb9179e0a55d4e48d54cd8924a0667d14b5a927224a77cf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1702419354530312151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-220067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71dbb680de839b3d2b5b4bc8c3796902,},Annotations:map[string]string{io.kubernetes.container.hash: d4901da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a46925fd-cb85-47ea-bfde-0738e8a42a53 name=/runtime.v1.RuntimeServi
ce/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7660419f1bea6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            8 seconds ago       Running             hello-world-app           0                   26266ed973bd3       hello-world-app-5f5d8b66bb-zrck8
	51c1ddba80299       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   8e48f9c802009       nginx
	28c432c7b9b09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   2 minutes ago       Running             storage-provisioner       1                   17d25d1c1a7ee       storage-provisioner
	4750ea6c0a8f0       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   5d581138c72ef       ingress-nginx-controller-7fcf777cb7-t7v4x
	0153ec8d8c066       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   4f4509614d9f5       ingress-nginx-admission-patch-twkt7
	7c1af5a6aef50       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   7131b67fdaf87       ingress-nginx-admission-create-4xsg7
	e00c8ad44bde5       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   e37c541af3d4f       coredns-66bff467f8-g7fpf
	6e472356d999c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   f0fd97a895a53       kube-proxy-th7h9
	d7be0d734c3c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   17d25d1c1a7ee       storage-provisioner
	8ad5433b6543e       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   cbeb0927dbcab       etcd-ingress-addon-legacy-220067
	e3c4458edef97       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   18d314e482126       kube-controller-manager-ingress-addon-legacy-220067
	f76324b741013       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   498c6d9cdc29e       kube-scheduler-ingress-addon-legacy-220067
	13baa2892fc15       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   2b788a8753b6a       kube-apiserver-ingress-addon-legacy-220067
	
	* 
	* ==> coredns [e00c8ad44bde54b9ebc44166efdb5a75edfd41f81e36b23c7cfbd30ffc285da4] <==
	* [INFO] 10.244.0.5:53221 - 58770 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061443s
	[INFO] 10.244.0.5:33563 - 41410 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097747s
	[INFO] 10.244.0.5:33563 - 55985 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098567s
	[INFO] 10.244.0.5:53221 - 5618 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030001s
	[INFO] 10.244.0.5:53221 - 26461 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030819s
	[INFO] 10.244.0.5:33563 - 25326 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098175s
	[INFO] 10.244.0.5:53221 - 59560 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026226s
	[INFO] 10.244.0.5:33563 - 19238 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000192639s
	[INFO] 10.244.0.5:53221 - 33950 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026829s
	[INFO] 10.244.0.5:53221 - 54544 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029132s
	[INFO] 10.244.0.5:53221 - 56417 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033782s
	[INFO] 10.244.0.5:32940 - 57824 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087627s
	[INFO] 10.244.0.5:32940 - 1157 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096322s
	[INFO] 10.244.0.5:55515 - 24317 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044937s
	[INFO] 10.244.0.5:55515 - 45356 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000252173s
	[INFO] 10.244.0.5:32940 - 28123 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00013122s
	[INFO] 10.244.0.5:32940 - 4951 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083961s
	[INFO] 10.244.0.5:55515 - 7809 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054296s
	[INFO] 10.244.0.5:55515 - 51819 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000108582s
	[INFO] 10.244.0.5:32940 - 30358 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171178s
	[INFO] 10.244.0.5:55515 - 60040 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073618s
	[INFO] 10.244.0.5:32940 - 41172 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000202703s
	[INFO] 10.244.0.5:32940 - 36495 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044287s
	[INFO] 10.244.0.5:55515 - 43965 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058424s
	[INFO] 10.244.0.5:55515 - 65416 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090683s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-220067
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-220067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=ingress-addon-legacy-220067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_16_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:15:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-220067
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:19:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:17:03 +0000   Tue, 12 Dec 2023 22:15:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:17:03 +0000   Tue, 12 Dec 2023 22:15:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:17:03 +0000   Tue, 12 Dec 2023 22:15:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:17:03 +0000   Tue, 12 Dec 2023 22:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    ingress-addon-legacy-220067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 97d46ecc5cae48dbb21b02448feef52a
	  System UUID:                97d46ecc-5cae-48db-b21b-02448feef52a
	  Boot ID:                    72e77e1e-e464-4f41-bddb-a47851240e84
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-zrck8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-66bff467f8-g7fpf                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m9s
	  kube-system                 etcd-ingress-addon-legacy-220067                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-apiserver-ingress-addon-legacy-220067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-220067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-proxy-th7h9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kube-scheduler-ingress-addon-legacy-220067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m35s (x4 over 3m35s)  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s (x5 over 3m35s)  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s (x4 over 3m35s)  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m25s                  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s                  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s                  kubelet     Node ingress-addon-legacy-220067 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m15s                  kubelet     Node ingress-addon-legacy-220067 status is now: NodeReady
	  Normal  Starting                 3m6s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec12 22:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.095531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.451886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.546136] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147789] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.041280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.462491] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.111254] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.146676] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.115541] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.217074] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +8.006448] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +2.802626] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 22:16] systemd-fstab-generator[1412]: Ignoring "noauto" for root device
	[ +19.440173] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.168702] kauditd_printk_skb: 13 callbacks suppressed
	[ +26.937158] kauditd_printk_skb: 21 callbacks suppressed
	[Dec12 22:19] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.790075] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [8ad5433b6543e5295f90a052e210fa4e968f477b002c6404e9bc1a1d7c148467] <==
	* raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 became follower at term 1
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)
	2023-12-12 22:15:56.396765 W | auth: simple token is not cryptographically signed
	2023-12-12 22:15:56.400451 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-12 22:15:56.402317 I | etcdserver: 44b3a0f32f80bb09 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-12 22:15:56.402530 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 22:15:56.402768 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)
	2023-12-12 22:15:56.403127 I | etcdserver/membership: added member 44b3a0f32f80bb09 [https://192.168.39.145:2380] to cluster 33ee9922f2bf4379
	2023-12-12 22:15:56.403259 I | embed: listening for peers on 192.168.39.145:2380
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 is starting a new election at term 1
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 became candidate at term 2
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 2
	raft2023/12/12 22:15:56 INFO: 44b3a0f32f80bb09 became leader at term 2
	raft2023/12/12 22:15:56 INFO: raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 2
	2023-12-12 22:15:56.790303 I | etcdserver: published {Name:ingress-addon-legacy-220067 ClientURLs:[https://192.168.39.145:2379]} to cluster 33ee9922f2bf4379
	2023-12-12 22:15:56.790411 I | embed: ready to serve client requests
	2023-12-12 22:15:56.790957 I | embed: ready to serve client requests
	2023-12-12 22:15:56.791758 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 22:15:56.791898 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 22:15:56.794658 I | embed: serving client requests on 192.168.39.145:2379
	2023-12-12 22:15:56.797897 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 22:15:56.798398 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 22:16:20.003918 W | etcdserver: read-only range request "key:\"/registry/daemonsets/kube-system/kube-proxy\" " with result "range_response_count:1 size:2927" took too long (161.316271ms) to execute
	2023-12-12 22:17:01.865315 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2214" took too long (134.317042ms) to execute
	
	* 
	* ==> kernel <==
	*  22:19:28 up 4 min,  0 users,  load average: 0.76, 0.34, 0.14
	Linux ingress-addon-legacy-220067 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [13baa2892fc1545a5ef788ef83536aea99c28ad90ce500722a5fbf0a49aeac5e] <==
	* I1212 22:15:59.769031       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E1212 22:15:59.781510       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.145, ResourceVersion: 0, AdditionalErrorMsg: 
	I1212 22:15:59.864655       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:15:59.864870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 22:15:59.864883       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 22:15:59.864911       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 22:15:59.874619       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 22:16:00.761279       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 22:16:00.761372       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 22:16:00.770321       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 22:16:00.774443       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 22:16:00.774488       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 22:16:01.246498       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:16:01.286174       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 22:16:01.446328       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
	I1212 22:16:01.447119       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 22:16:01.458320       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 22:16:02.126034       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 22:16:02.902598       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 22:16:03.005430       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 22:16:03.326757       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:16:19.376149       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 22:16:19.486582       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 22:16:27.267732       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1212 22:16:54.501666       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [e3c4458edef971112e5bd646bdebed0d765d118c780b517c8616d2338c5ce060] <==
	* I1212 22:16:19.423119       1 disruption.go:339] Sending events to api server.
	I1212 22:16:19.439854       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1212 22:16:19.474054       1 shared_informer.go:230] Caches are synced for taint 
	I1212 22:16:19.474434       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1212 22:16:19.475610       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-220067. Assuming now as a timestamp.
	I1212 22:16:19.475682       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1212 22:16:19.475860       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1212 22:16:19.476047       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-220067", UID:"1c5610a8-9e7d-4e40-a286-2092ec5c0ba2", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-220067 event: Registered Node ingress-addon-legacy-220067 in Controller
	I1212 22:16:19.567944       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"fafaede1-bfb0-4935-bea5-195b3555e0b9", APIVersion:"apps/v1", ResourceVersion:"219", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-th7h9
	I1212 22:16:19.600143       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 22:16:19.607913       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 22:16:19.730522       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 22:16:19.730650       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 22:16:19.776751       1 shared_informer.go:230] Caches are synced for garbage collector 
	E1212 22:16:20.023641       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"fafaede1-bfb0-4935-bea5-195b3555e0b9", ResourceVersion:"219", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63838016163, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00009c6a0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc00009c6c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00009c6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013b0d00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc00009c720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00009c740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00009c780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000300000), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000052828), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000038770), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000f5c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000052888)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1212 22:16:27.204279       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"13a77afe-9630-4fde-9ebe-13c2121fc079", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 22:16:27.225456       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"8ab96616-16b9-4ac2-a60d-f809f7e09788", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-t7v4x
	I1212 22:16:27.307962       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c67ea345-f0f2-474e-9237-4257f7c52918", APIVersion:"batch/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4xsg7
	I1212 22:16:27.355914       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"41d7f2e1-5778-42e2-9d46-c926e0b804fc", APIVersion:"batch/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-twkt7
	I1212 22:16:29.648317       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c67ea345-f0f2-474e-9237-4257f7c52918", APIVersion:"batch/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 22:16:30.652513       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"41d7f2e1-5778-42e2-9d46-c926e0b804fc", APIVersion:"batch/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 22:19:16.809634       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"c40b5637-d5cb-4b70-8a06-6695bc096072", APIVersion:"apps/v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1212 22:19:16.837077       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"63fd6906-48dc-4026-bd10-80a3f6168a46", APIVersion:"apps/v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-zrck8
	E1212 22:19:25.494394       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-jcmqs" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
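The long daemon_controller error above ends in "Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again". That is Kubernetes' optimistic-concurrency conflict: the controller issued a write against a stale resourceVersion and will simply re-read and retry, so by itself this line is benign and does not explain the ingress failure. For reference, the usual client-side pattern for this class of conflict is a read-modify-write loop wrapped in retry.RetryOnConflict; the sketch below is illustrative only (the clientset parameter and the annotation mutation are assumptions, not taken from kube-controller-manager's source).

// Sketch: retry a read-modify-write on resourceVersion conflicts with client-go.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func touchKubeProxyDaemonSet(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the Update carries the latest resourceVersion.
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example/touched"] = "true" // hypothetical mutation for the sketch
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}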
	
	* 
	* ==> kube-proxy [6e472356d999c5b16521155290973808769c7accb4a315a7815222dc3645169f] <==
	* W1212 22:16:22.134875       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 22:16:22.143758       1 node.go:136] Successfully retrieved node IP: 192.168.39.145
	I1212 22:16:22.143987       1 server_others.go:186] Using iptables Proxier.
	I1212 22:16:22.145167       1 server.go:583] Version: v1.18.20
	I1212 22:16:22.146868       1 config.go:315] Starting service config controller
	I1212 22:16:22.146959       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 22:16:22.147579       1 config.go:133] Starting endpoints config controller
	I1212 22:16:22.147667       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 22:16:22.247261       1 shared_informer.go:230] Caches are synced for service config 
	I1212 22:16:22.247936       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f76324b7410131cec8907c259cbda54cab218ae7f870da6a11fff408031cfb76] <==
	* W1212 22:15:59.849021       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 22:15:59.893560       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 22:15:59.893662       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 22:15:59.895496       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 22:15:59.895614       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 22:15:59.895622       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 22:15:59.895633       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 22:15:59.912523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:15:59.912643       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:15:59.912738       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:15:59.912853       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:15:59.912932       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:15:59.912979       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:15:59.913017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:15:59.913067       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:15:59.913109       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:15:59.913156       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:15:59.913258       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:15:59.918467       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:16:00.777301       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:16:00.824306       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:16:00.893978       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:16:00.960310       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:16:01.077642       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1212 22:16:01.395911       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
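The repeated "is forbidden: User \"system:kube-scheduler\" cannot list ..." errors above are a startup race: the scheduler's informers begin listing resources before its RBAC bindings are visible to the authorizer, and the errors stop once the caches sync at 22:16:01, so they are not the cause of this failure. If such errors persisted, one way to confirm whether the permission is genuinely missing is a SubjectAccessReview issued from an admin client; the following Go sketch assumes a pre-built clientset and checks just one of the resources listed above.

// Sketch: ask the API server whether system:kube-scheduler may list
// replicasets.apps cluster-wide (an assumed admin clientset is passed in).
package example

import (
	"context"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func schedulerCanListReplicaSets(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:    "apps",
				Resource: "replicasets",
				Verb:     "list",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}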
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:15:26 UTC, ends at Tue 2023-12-12 22:19:28 UTC. --
	Dec 12 22:16:31 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:31.764285    1419 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87-ingress-nginx-admission-token-8pmlp" (OuterVolumeSpecName: "ingress-nginx-admission-token-8pmlp") pod "84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87" (UID: "84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87"). InnerVolumeSpecName "ingress-nginx-admission-token-8pmlp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:16:31 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:31.849313    1419 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-8pmlp" (UniqueName: "kubernetes.io/secret/84d2cfa5-5883-4bc8-b5d0-5cf00f3b5b87-ingress-nginx-admission-token-8pmlp") on node "ingress-addon-legacy-220067" DevicePath ""
	Dec 12 22:16:39 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:39.553608    1419 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 22:16:39 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:39.677367    1419 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-dndkm" (UniqueName: "kubernetes.io/secret/6e647183-1aae-41a8-b755-8037848cf8dc-minikube-ingress-dns-token-dndkm") pod "kube-ingress-dns-minikube" (UID: "6e647183-1aae-41a8-b755-8037848cf8dc")
	Dec 12 22:16:52 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:52.109735    1419 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5
	Dec 12 22:16:54 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:54.675634    1419 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 22:16:54 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:16:54.830627    1419 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hdwhn" (UniqueName: "kubernetes.io/secret/9e4fd974-1e2b-49ab-9b6c-d12e26372b63-default-token-hdwhn") pod "nginx" (UID: "9e4fd974-1e2b-49ab-9b6c-d12e26372b63")
	Dec 12 22:19:16 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:16.833463    1419 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 22:19:16 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:16.937013    1419 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hdwhn" (UniqueName: "kubernetes.io/secret/42e57c61-6ecb-4aa9-abc8-6781bed985b5-default-token-hdwhn") pod "hello-world-app-5f5d8b66bb-zrck8" (UID: "42e57c61-6ecb-4aa9-abc8-6781bed985b5")
	Dec 12 22:19:18 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:18.658770    1419 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4dce2983f0dac9894dd24a2bf2b3762f67458cebbe674f29d71e9344d046fa1b
	Dec 12 22:19:18 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:18.742467    1419 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-dndkm" (UniqueName: "kubernetes.io/secret/6e647183-1aae-41a8-b755-8037848cf8dc-minikube-ingress-dns-token-dndkm") pod "6e647183-1aae-41a8-b755-8037848cf8dc" (UID: "6e647183-1aae-41a8-b755-8037848cf8dc")
	Dec 12 22:19:18 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:18.749003    1419 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e647183-1aae-41a8-b755-8037848cf8dc-minikube-ingress-dns-token-dndkm" (OuterVolumeSpecName: "minikube-ingress-dns-token-dndkm") pod "6e647183-1aae-41a8-b755-8037848cf8dc" (UID: "6e647183-1aae-41a8-b755-8037848cf8dc"). InnerVolumeSpecName "minikube-ingress-dns-token-dndkm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:19:18 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:18.842800    1419 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-dndkm" (UniqueName: "kubernetes.io/secret/6e647183-1aae-41a8-b755-8037848cf8dc-minikube-ingress-dns-token-dndkm") on node "ingress-addon-legacy-220067" DevicePath ""
	Dec 12 22:19:19 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:19.046265    1419 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4dce2983f0dac9894dd24a2bf2b3762f67458cebbe674f29d71e9344d046fa1b
	Dec 12 22:19:19 ingress-addon-legacy-220067 kubelet[1419]: E1212 22:19:19.046961    1419 remote_runtime.go:295] ContainerStatus "4dce2983f0dac9894dd24a2bf2b3762f67458cebbe674f29d71e9344d046fa1b" from runtime service failed: rpc error: code = NotFound desc = could not find container "4dce2983f0dac9894dd24a2bf2b3762f67458cebbe674f29d71e9344d046fa1b": container with ID starting with 4dce2983f0dac9894dd24a2bf2b3762f67458cebbe674f29d71e9344d046fa1b not found: ID does not exist
	Dec 12 22:19:20 ingress-addon-legacy-220067 kubelet[1419]: E1212 22:19:20.761999    1419 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t7v4x.17a035921999b45c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t7v4x", UID:"988eec5c-0905-424a-bf26-59590bdc5a36", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-220067"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15654fa2d1fa45c, ext:197943202307, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15654fa2d1fa45c, ext:197943202307, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t7v4x.17a035921999b45c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 22:19:20 ingress-addon-legacy-220067 kubelet[1419]: E1212 22:19:20.778514    1419 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t7v4x.17a035921999b45c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t7v4x", UID:"988eec5c-0905-424a-bf26-59590bdc5a36", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-220067"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15654fa2d1fa45c, ext:197943202307, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15654fa2ddcc4b5, ext:197955596891, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t7v4x.17a035921999b45c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 22:19:23 ingress-addon-legacy-220067 kubelet[1419]: W1212 22:19:23.688954    1419 pod_container_deletor.go:77] Container "5d581138c72effe937699d561487822c862e950198785c54ff0e6cc4e1381d3e" not found in pod's containers
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.866900    1419 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-webhook-cert") pod "988eec5c-0905-424a-bf26-59590bdc5a36" (UID: "988eec5c-0905-424a-bf26-59590bdc5a36")
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.866953    1419 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-gcdxl" (UniqueName: "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-ingress-nginx-token-gcdxl") pod "988eec5c-0905-424a-bf26-59590bdc5a36" (UID: "988eec5c-0905-424a-bf26-59590bdc5a36")
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.870166    1419 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "988eec5c-0905-424a-bf26-59590bdc5a36" (UID: "988eec5c-0905-424a-bf26-59590bdc5a36"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.870448    1419 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-ingress-nginx-token-gcdxl" (OuterVolumeSpecName: "ingress-nginx-token-gcdxl") pod "988eec5c-0905-424a-bf26-59590bdc5a36" (UID: "988eec5c-0905-424a-bf26-59590bdc5a36"). InnerVolumeSpecName "ingress-nginx-token-gcdxl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.967372    1419 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-webhook-cert") on node "ingress-addon-legacy-220067" DevicePath ""
	Dec 12 22:19:24 ingress-addon-legacy-220067 kubelet[1419]: I1212 22:19:24.967441    1419 reconciler.go:319] Volume detached for volume "ingress-nginx-token-gcdxl" (UniqueName: "kubernetes.io/secret/988eec5c-0905-424a-bf26-59590bdc5a36-ingress-nginx-token-gcdxl") on node "ingress-addon-legacy-220067" DevicePath ""
	Dec 12 22:19:25 ingress-addon-legacy-220067 kubelet[1419]: W1212 22:19:25.371040    1419 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/988eec5c-0905-424a-bf26-59590bdc5a36/volumes" does not exist
	
	* 
	* ==> storage-provisioner [28c432c7b9b0952d269f6a517ba8a06d1e81e6a567c36d8fbb070b167cd6f1d1] <==
	* I1212 22:16:52.253798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:16:52.272899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:16:52.272973       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:16:52.282913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:16:52.283276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-220067_0bdc6c0d-d50c-40d4-b67c-acd635bfa38e!
	I1212 22:16:52.283908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a90a9e1-1868-4656-9bbd-9e1ae8b7c10d", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-220067_0bdc6c0d-d50c-40d4-b67c-acd635bfa38e became leader
	I1212 22:16:52.383906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-220067_0bdc6c0d-d50c-40d4-b67c-acd635bfa38e!
	
	* 
	* ==> storage-provisioner [d7be0d734c3c34539ab43aa8b1472460bd26231a0516ba8a9932c40deeb672b5] <==
	* I1212 22:16:21.678520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 22:16:51.680173       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
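The two storage-provisioner blocks tell one story: the first instance (d7be0d73) started at 22:16:21 and died thirty seconds later because its startup call to the API server over the in-cluster service IP (10.96.0.1:443) timed out, while the replacement instance (28c432c7) started at 22:16:52 and completed the same check immediately. That pattern usually means the cluster network was simply not ready yet rather than anything wrong with the provisioner itself. A minimal Go sketch of that kind of startup check with client-go, assuming it runs inside a pod (this is not the provisioner's actual source):

// Sketch: in-cluster API reachability check, roughly what the
// "error getting server version" fatal line corresponds to.
package example

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func checkAPIServer() error {
	cfg, err := rest.InClusterConfig() // resolves to the kubernetes.default service IP
	if err != nil {
		return err
	}
	cfg.Timeout = 32 * time.Second // mirrors the ?timeout=32s in the failing request

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("error getting server version: %w", err)
	}
	fmt.Println("API server reachable, version", v.GitVersion)
	return nil
}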
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-220067 -n ingress-addon-legacy-220067
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-220067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (169.82s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- sh -c "ping -c 1 192.168.39.1": exit status 1 (209.204842ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-7fg9p): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- sh -c "ping -c 1 192.168.39.1": exit status 1 (200.44367ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-trmtr): exit status 1
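Both pings fail the same way: stdout shows the ping banner, but stderr reports "permission denied (are you root?)", which is busybox's error when it cannot use a raw ICMP socket. This points at container privileges rather than at the host 192.168.39.1 being unreachable; CRI-O's default container capability set typically does not include CAP_NET_RAW, so busybox ping is refused inside the pod. A hypothetical pod definition that adds the capability back, written as a Go sketch with the corev1 types (the name, image tag, and command are illustrative, not the test's actual manifest):

// Sketch: a busybox pod that explicitly adds CAP_NET_RAW so that
// "ping -c 1 <addr>" can open a raw ICMP socket inside the container.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func pingCapablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-ping"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.28",
				Command: []string{"sleep", "3600"},
				SecurityContext: &corev1.SecurityContext{
					Capabilities: &corev1.Capabilities{
						Add: []corev1.Capability{"NET_RAW"},
					},
				},
			}},
		},
	}
}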
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-054207 -n multinode-054207
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-054207 logs -n 25: (1.459588341s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-185823 ssh -- ls                    | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-185823 ssh --                       | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-185823                           | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	| start   | -p mount-start-2-185823                           | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:24 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC |                     |
	|         | --profile mount-start-2-185823                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-185823 ssh -- ls                    | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC | 12 Dec 23 22:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-185823 ssh --                       | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC | 12 Dec 23 22:24 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-185823                           | mount-start-2-185823 | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC | 12 Dec 23 22:24 UTC |
	| delete  | -p mount-start-1-165133                           | mount-start-1-165133 | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC | 12 Dec 23 22:24 UTC |
	| start   | -p multinode-054207                               | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:24 UTC | 12 Dec 23 22:26 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- apply -f                   | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- rollout                    | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- get pods -o                | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- get pods -o                | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-7fg9p --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-trmtr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-7fg9p --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-trmtr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-7fg9p -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-trmtr -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- get pods -o                | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-7fg9p                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC |                     |
	|         | busybox-5bc68d56bd-7fg9p -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC | 12 Dec 23 22:26 UTC |
	|         | busybox-5bc68d56bd-trmtr                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-054207 -- exec                       | multinode-054207     | jenkins | v1.32.0 | 12 Dec 23 22:26 UTC |                     |
	|         | busybox-5bc68d56bd-trmtr -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:24:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:24:25.190595   96656 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:24:25.190888   96656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:24:25.190898   96656 out.go:309] Setting ErrFile to fd 2...
	I1212 22:24:25.190903   96656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:24:25.191088   96656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:24:25.191800   96656 out.go:303] Setting JSON to false
	I1212 22:24:25.192779   96656 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11219,"bootTime":1702408646,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:24:25.192852   96656 start.go:138] virtualization: kvm guest
	I1212 22:24:25.195200   96656 out.go:177] * [multinode-054207] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:24:25.197023   96656 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:24:25.197064   96656 notify.go:220] Checking for updates...
	I1212 22:24:25.198553   96656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:24:25.200139   96656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:24:25.201919   96656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:24:25.203746   96656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:24:25.205277   96656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:24:25.206901   96656 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:24:25.243945   96656 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 22:24:25.245454   96656 start.go:298] selected driver: kvm2
	I1212 22:24:25.245467   96656 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:24:25.245479   96656 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:24:25.246195   96656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:24:25.246295   96656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:24:25.261482   96656 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:24:25.261532   96656 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:24:25.261733   96656 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:24:25.261809   96656 cni.go:84] Creating CNI manager for ""
	I1212 22:24:25.261821   96656 cni.go:136] 0 nodes found, recommending kindnet
	I1212 22:24:25.261830   96656 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:24:25.261840   96656 start_flags.go:323] config:
	{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:24:25.261967   96656 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:24:25.263984   96656 out.go:177] * Starting control plane node multinode-054207 in cluster multinode-054207
	I1212 22:24:25.265448   96656 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:24:25.265496   96656 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:24:25.265509   96656 cache.go:56] Caching tarball of preloaded images
	I1212 22:24:25.265586   96656 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:24:25.265601   96656 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:24:25.265953   96656 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:24:25.265977   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json: {Name:mkd913afd408e40ec5e5d7a727423bd95e3902ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:25.266130   96656 start.go:365] acquiring machines lock for multinode-054207: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:24:25.266158   96656 start.go:369] acquired machines lock for "multinode-054207" in 14.159µs
	I1212 22:24:25.266171   96656 start.go:93] Provisioning new machine with config: &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:24:25.266228   96656 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 22:24:25.268063   96656 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 22:24:25.268254   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:24:25.268296   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:24:25.283302   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41939
	I1212 22:24:25.283803   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:24:25.284405   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:24:25.284436   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:24:25.284808   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:24:25.285001   96656 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:24:25.285180   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:25.285348   96656 start.go:159] libmachine.API.Create for "multinode-054207" (driver="kvm2")
	I1212 22:24:25.285398   96656 client.go:168] LocalClient.Create starting
	I1212 22:24:25.285437   96656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 22:24:25.285491   96656 main.go:141] libmachine: Decoding PEM data...
	I1212 22:24:25.285516   96656 main.go:141] libmachine: Parsing certificate...
	I1212 22:24:25.285591   96656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 22:24:25.285618   96656 main.go:141] libmachine: Decoding PEM data...
	I1212 22:24:25.285636   96656 main.go:141] libmachine: Parsing certificate...
	I1212 22:24:25.285664   96656 main.go:141] libmachine: Running pre-create checks...
	I1212 22:24:25.285675   96656 main.go:141] libmachine: (multinode-054207) Calling .PreCreateCheck
	I1212 22:24:25.286038   96656 main.go:141] libmachine: (multinode-054207) Calling .GetConfigRaw
	I1212 22:24:25.286423   96656 main.go:141] libmachine: Creating machine...
	I1212 22:24:25.286437   96656 main.go:141] libmachine: (multinode-054207) Calling .Create
	I1212 22:24:25.286557   96656 main.go:141] libmachine: (multinode-054207) Creating KVM machine...
	I1212 22:24:25.287927   96656 main.go:141] libmachine: (multinode-054207) DBG | found existing default KVM network
	I1212 22:24:25.288599   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:25.288434   96679 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1212 22:24:25.294431   96656 main.go:141] libmachine: (multinode-054207) DBG | trying to create private KVM network mk-multinode-054207 192.168.39.0/24...
	I1212 22:24:25.367219   96656 main.go:141] libmachine: (multinode-054207) DBG | private KVM network mk-multinode-054207 192.168.39.0/24 created
	I1212 22:24:25.367272   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:25.367193   96679 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:24:25.367325   96656 main.go:141] libmachine: (multinode-054207) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207 ...
	I1212 22:24:25.367357   96656 main.go:141] libmachine: (multinode-054207) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:24:25.367436   96656 main.go:141] libmachine: (multinode-054207) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:24:25.597259   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:25.597121   96679 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa...
	I1212 22:24:25.737723   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:25.737571   96679 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/multinode-054207.rawdisk...
	I1212 22:24:25.737763   96656 main.go:141] libmachine: (multinode-054207) DBG | Writing magic tar header
	I1212 22:24:25.737779   96656 main.go:141] libmachine: (multinode-054207) DBG | Writing SSH key tar header
	I1212 22:24:25.737788   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:25.737690   96679 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207 ...
	I1212 22:24:25.737809   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207
	I1212 22:24:25.737820   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 22:24:25.737834   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207 (perms=drwx------)
	I1212 22:24:25.737852   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 22:24:25.737864   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 22:24:25.737872   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:24:25.737882   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 22:24:25.737891   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 22:24:25.737899   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 22:24:25.737905   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home/jenkins
	I1212 22:24:25.737916   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 22:24:25.737930   96656 main.go:141] libmachine: (multinode-054207) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 22:24:25.737939   96656 main.go:141] libmachine: (multinode-054207) Creating domain...
	I1212 22:24:25.737949   96656 main.go:141] libmachine: (multinode-054207) DBG | Checking permissions on dir: /home
	I1212 22:24:25.737960   96656 main.go:141] libmachine: (multinode-054207) DBG | Skipping /home - not owner
	I1212 22:24:25.739120   96656 main.go:141] libmachine: (multinode-054207) define libvirt domain using xml: 
	I1212 22:24:25.739145   96656 main.go:141] libmachine: (multinode-054207) <domain type='kvm'>
	I1212 22:24:25.739157   96656 main.go:141] libmachine: (multinode-054207)   <name>multinode-054207</name>
	I1212 22:24:25.739172   96656 main.go:141] libmachine: (multinode-054207)   <memory unit='MiB'>2200</memory>
	I1212 22:24:25.739195   96656 main.go:141] libmachine: (multinode-054207)   <vcpu>2</vcpu>
	I1212 22:24:25.739212   96656 main.go:141] libmachine: (multinode-054207)   <features>
	I1212 22:24:25.739221   96656 main.go:141] libmachine: (multinode-054207)     <acpi/>
	I1212 22:24:25.739232   96656 main.go:141] libmachine: (multinode-054207)     <apic/>
	I1212 22:24:25.739273   96656 main.go:141] libmachine: (multinode-054207)     <pae/>
	I1212 22:24:25.739320   96656 main.go:141] libmachine: (multinode-054207)     
	I1212 22:24:25.739353   96656 main.go:141] libmachine: (multinode-054207)   </features>
	I1212 22:24:25.739373   96656 main.go:141] libmachine: (multinode-054207)   <cpu mode='host-passthrough'>
	I1212 22:24:25.739389   96656 main.go:141] libmachine: (multinode-054207)   
	I1212 22:24:25.739403   96656 main.go:141] libmachine: (multinode-054207)   </cpu>
	I1212 22:24:25.739415   96656 main.go:141] libmachine: (multinode-054207)   <os>
	I1212 22:24:25.739473   96656 main.go:141] libmachine: (multinode-054207)     <type>hvm</type>
	I1212 22:24:25.739502   96656 main.go:141] libmachine: (multinode-054207)     <boot dev='cdrom'/>
	I1212 22:24:25.739518   96656 main.go:141] libmachine: (multinode-054207)     <boot dev='hd'/>
	I1212 22:24:25.739531   96656 main.go:141] libmachine: (multinode-054207)     <bootmenu enable='no'/>
	I1212 22:24:25.739552   96656 main.go:141] libmachine: (multinode-054207)   </os>
	I1212 22:24:25.739563   96656 main.go:141] libmachine: (multinode-054207)   <devices>
	I1212 22:24:25.739579   96656 main.go:141] libmachine: (multinode-054207)     <disk type='file' device='cdrom'>
	I1212 22:24:25.739600   96656 main.go:141] libmachine: (multinode-054207)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/boot2docker.iso'/>
	I1212 22:24:25.739624   96656 main.go:141] libmachine: (multinode-054207)       <target dev='hdc' bus='scsi'/>
	I1212 22:24:25.739636   96656 main.go:141] libmachine: (multinode-054207)       <readonly/>
	I1212 22:24:25.739648   96656 main.go:141] libmachine: (multinode-054207)     </disk>
	I1212 22:24:25.739661   96656 main.go:141] libmachine: (multinode-054207)     <disk type='file' device='disk'>
	I1212 22:24:25.739683   96656 main.go:141] libmachine: (multinode-054207)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 22:24:25.739700   96656 main.go:141] libmachine: (multinode-054207)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/multinode-054207.rawdisk'/>
	I1212 22:24:25.739711   96656 main.go:141] libmachine: (multinode-054207)       <target dev='hda' bus='virtio'/>
	I1212 22:24:25.739719   96656 main.go:141] libmachine: (multinode-054207)     </disk>
	I1212 22:24:25.739733   96656 main.go:141] libmachine: (multinode-054207)     <interface type='network'>
	I1212 22:24:25.739746   96656 main.go:141] libmachine: (multinode-054207)       <source network='mk-multinode-054207'/>
	I1212 22:24:25.739761   96656 main.go:141] libmachine: (multinode-054207)       <model type='virtio'/>
	I1212 22:24:25.739770   96656 main.go:141] libmachine: (multinode-054207)     </interface>
	I1212 22:24:25.739779   96656 main.go:141] libmachine: (multinode-054207)     <interface type='network'>
	I1212 22:24:25.739786   96656 main.go:141] libmachine: (multinode-054207)       <source network='default'/>
	I1212 22:24:25.739792   96656 main.go:141] libmachine: (multinode-054207)       <model type='virtio'/>
	I1212 22:24:25.739802   96656 main.go:141] libmachine: (multinode-054207)     </interface>
	I1212 22:24:25.739823   96656 main.go:141] libmachine: (multinode-054207)     <serial type='pty'>
	I1212 22:24:25.739839   96656 main.go:141] libmachine: (multinode-054207)       <target port='0'/>
	I1212 22:24:25.739856   96656 main.go:141] libmachine: (multinode-054207)     </serial>
	I1212 22:24:25.739869   96656 main.go:141] libmachine: (multinode-054207)     <console type='pty'>
	I1212 22:24:25.739893   96656 main.go:141] libmachine: (multinode-054207)       <target type='serial' port='0'/>
	I1212 22:24:25.739904   96656 main.go:141] libmachine: (multinode-054207)     </console>
	I1212 22:24:25.739914   96656 main.go:141] libmachine: (multinode-054207)     <rng model='virtio'>
	I1212 22:24:25.739923   96656 main.go:141] libmachine: (multinode-054207)       <backend model='random'>/dev/random</backend>
	I1212 22:24:25.739931   96656 main.go:141] libmachine: (multinode-054207)     </rng>
	I1212 22:24:25.739939   96656 main.go:141] libmachine: (multinode-054207)     
	I1212 22:24:25.739968   96656 main.go:141] libmachine: (multinode-054207)     
	I1212 22:24:25.739994   96656 main.go:141] libmachine: (multinode-054207)   </devices>
	I1212 22:24:25.740009   96656 main.go:141] libmachine: (multinode-054207) </domain>
	I1212 22:24:25.740020   96656 main.go:141] libmachine: (multinode-054207) 
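The block above is the libvirt domain XML the kvm2 driver defines for this node: a 2-vCPU, 2200 MiB guest that boots the minikube ISO from a virtual CD-ROM, uses the raw disk image as its hard drive, and attaches one virtio NIC to the private mk-multinode-054207 network and one to libvirt's default network. As a rough illustration of how such a definition becomes a running guest, the sketch below uses the libvirt Go bindings; the module path and the flow are assumptions for illustration, not minikube's exact code.

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // libvirt Go bindings; module path assumed for this sketch
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // domainXML stands for the <domain type='kvm'> document logged above.
        domainXML := `<domain type='kvm'>...</domain>`

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }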
	I1212 22:24:25.744086   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:65:89:5c in network default
	I1212 22:24:25.744663   96656 main.go:141] libmachine: (multinode-054207) Ensuring networks are active...
	I1212 22:24:25.744680   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:25.745343   96656 main.go:141] libmachine: (multinode-054207) Ensuring network default is active
	I1212 22:24:25.745686   96656 main.go:141] libmachine: (multinode-054207) Ensuring network mk-multinode-054207 is active
	I1212 22:24:25.746154   96656 main.go:141] libmachine: (multinode-054207) Getting domain xml...
	I1212 22:24:25.746871   96656 main.go:141] libmachine: (multinode-054207) Creating domain...
	I1212 22:24:26.968351   96656 main.go:141] libmachine: (multinode-054207) Waiting to get IP...
	I1212 22:24:26.969220   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:26.969738   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:26.969780   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:26.969727   96679 retry.go:31] will retry after 250.350276ms: waiting for machine to come up
	I1212 22:24:27.221321   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:27.221742   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:27.221770   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:27.221687   96679 retry.go:31] will retry after 304.625559ms: waiting for machine to come up
	I1212 22:24:27.528281   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:27.528720   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:27.528752   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:27.528660   96679 retry.go:31] will retry after 452.757039ms: waiting for machine to come up
	I1212 22:24:27.983294   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:27.983737   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:27.983770   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:27.983667   96679 retry.go:31] will retry after 441.311932ms: waiting for machine to come up
	I1212 22:24:28.426171   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:28.426667   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:28.426696   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:28.426593   96679 retry.go:31] will retry after 566.019307ms: waiting for machine to come up
	I1212 22:24:28.995348   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:28.996112   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:28.996196   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:28.996089   96679 retry.go:31] will retry after 764.564451ms: waiting for machine to come up
	I1212 22:24:29.761970   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:29.762378   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:29.762410   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:29.762328   96679 retry.go:31] will retry after 1.141118889s: waiting for machine to come up
	I1212 22:24:30.905114   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:30.905572   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:30.905594   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:30.905521   96679 retry.go:31] will retry after 1.004806276s: waiting for machine to come up
	I1212 22:24:31.911772   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:31.912161   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:31.912219   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:31.912111   96679 retry.go:31] will retry after 1.214819895s: waiting for machine to come up
	I1212 22:24:33.128661   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:33.129065   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:33.129102   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:33.129010   96679 retry.go:31] will retry after 2.237925005s: waiting for machine to come up
	I1212 22:24:35.369042   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:35.369453   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:35.369487   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:35.369403   96679 retry.go:31] will retry after 2.463566434s: waiting for machine to come up
	I1212 22:24:37.835273   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:37.835709   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:37.835745   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:37.835641   96679 retry.go:31] will retry after 2.870551816s: waiting for machine to come up
	I1212 22:24:40.708081   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:40.708438   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:40.708469   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:40.708389   96679 retry.go:31] will retry after 4.456515518s: waiting for machine to come up
	I1212 22:24:45.169991   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:45.170491   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:24:45.170515   96656 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:24:45.170450   96679 retry.go:31] will retry after 3.551710128s: waiting for machine to come up
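The repeated "unable to find current IP address ... will retry after ..." lines above are the driver polling the new network's DHCP leases until the guest reports an address, sleeping a slightly randomized, growing delay between attempts. Below is a minimal sketch of that polling pattern; lookupIP is a hypothetical stub, and this is not minikube's actual retry helper.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases;
    // it is a hypothetical stub for this sketch.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 14; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Jitter the delay so parallel runs don't poll in lockstep,
            // then grow the base delay for the next attempt (a cap would
            // normally be applied as well).
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        fmt.Println("timed out waiting for machine to come up")
    }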
	I1212 22:24:48.726171   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.726590   96656 main.go:141] libmachine: (multinode-054207) Found IP for machine: 192.168.39.172
	I1212 22:24:48.726622   96656 main.go:141] libmachine: (multinode-054207) Reserving static IP address...
	I1212 22:24:48.726632   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has current primary IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.727005   96656 main.go:141] libmachine: (multinode-054207) DBG | unable to find host DHCP lease matching {name: "multinode-054207", mac: "52:54:00:7d:bc:d2", ip: "192.168.39.172"} in network mk-multinode-054207
	I1212 22:24:48.808516   96656 main.go:141] libmachine: (multinode-054207) Reserved static IP address: 192.168.39.172
	I1212 22:24:48.808544   96656 main.go:141] libmachine: (multinode-054207) Waiting for SSH to be available...
	I1212 22:24:48.808557   96656 main.go:141] libmachine: (multinode-054207) DBG | Getting to WaitForSSH function...
	I1212 22:24:48.811223   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.811658   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:48.811691   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.811804   96656 main.go:141] libmachine: (multinode-054207) DBG | Using SSH client type: external
	I1212 22:24:48.811836   96656 main.go:141] libmachine: (multinode-054207) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa (-rw-------)
	I1212 22:24:48.811868   96656 main.go:141] libmachine: (multinode-054207) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:24:48.811883   96656 main.go:141] libmachine: (multinode-054207) DBG | About to run SSH command:
	I1212 22:24:48.811896   96656 main.go:141] libmachine: (multinode-054207) DBG | exit 0
	I1212 22:24:48.902975   96656 main.go:141] libmachine: (multinode-054207) DBG | SSH cmd err, output: <nil>: 
	I1212 22:24:48.903336   96656 main.go:141] libmachine: (multinode-054207) KVM machine creation complete!
	I1212 22:24:48.903635   96656 main.go:141] libmachine: (multinode-054207) Calling .GetConfigRaw
	I1212 22:24:48.904179   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:48.904422   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:48.904668   96656 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 22:24:48.904707   96656 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:24:48.906041   96656 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 22:24:48.906057   96656 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 22:24:48.906073   96656 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 22:24:48.906080   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:48.908342   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.908754   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:48.908782   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:48.908955   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:48.909135   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:48.909332   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:48.909527   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:48.909762   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:48.910128   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:48.910144   96656 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 22:24:49.030814   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
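The "exit 0" probe above is the usual "is SSH up yet" check: run a no-op command and treat a clean exit as success. A minimal sketch of the same probe with golang.org/x/crypto/ssh follows; the host, user, and key path are taken from the log, but the code itself is illustrative rather than minikube's implementation.

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh invocation disables host-key checking too
        }
        client, err := ssh.Dial("tcp", "192.168.39.172:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // A clean exit from a trivial command means the guest's sshd is ready.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }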
	I1212 22:24:49.030844   96656 main.go:141] libmachine: Detecting the provisioner...
	I1212 22:24:49.030855   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.033769   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.034120   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.034149   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.034323   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:49.034533   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.034698   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.034844   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:49.035107   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:49.035603   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:49.035621   96656 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 22:24:49.155875   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 22:24:49.156007   96656 main.go:141] libmachine: found compatible host: buildroot
	I1212 22:24:49.156026   96656 main.go:141] libmachine: Provisioning with buildroot...
	I1212 22:24:49.156040   96656 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:24:49.156342   96656 buildroot.go:166] provisioning hostname "multinode-054207"
	I1212 22:24:49.156369   96656 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:24:49.156579   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.159134   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.159493   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.159527   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.159776   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:49.160006   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.160208   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.160356   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:49.160532   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:49.160875   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:49.160893   96656 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207 && echo "multinode-054207" | sudo tee /etc/hostname
	I1212 22:24:49.292237   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-054207
	
	I1212 22:24:49.292272   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.294834   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.295219   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.295269   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.295396   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:49.295611   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.295788   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.295938   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:49.296083   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:49.296454   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:49.296473   96656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-054207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-054207/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-054207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:24:49.423132   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:24:49.423164   96656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:24:49.423199   96656 buildroot.go:174] setting up certificates
	I1212 22:24:49.423210   96656 provision.go:83] configureAuth start
	I1212 22:24:49.423219   96656 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:24:49.423555   96656 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:24:49.426202   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.426573   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.426611   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.426733   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.429119   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.429476   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.429509   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.429653   96656 provision.go:138] copyHostCerts
	I1212 22:24:49.429687   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:24:49.429736   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:24:49.429751   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:24:49.429804   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:24:49.429887   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:24:49.429903   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:24:49.429912   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:24:49.429934   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:24:49.429987   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:24:49.430002   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:24:49.430008   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:24:49.430024   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:24:49.430079   96656 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.multinode-054207 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube multinode-054207]
	I1212 22:24:49.604035   96656 provision.go:172] copyRemoteCerts
	I1212 22:24:49.604159   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:24:49.604191   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.607001   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.607438   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.607478   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.607679   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:49.607875   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.608066   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:49.608253   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:24:49.697511   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:24:49.697586   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 22:24:49.719675   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:24:49.719742   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:24:49.741699   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:24:49.741791   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:24:49.764199   96656 provision.go:86] duration metric: configureAuth took 340.972435ms
	I1212 22:24:49.764236   96656 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:24:49.764430   96656 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:24:49.764519   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:49.767209   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.767540   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:49.767583   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:49.767734   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:49.767909   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.768070   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:49.768160   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:49.768303   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:49.768631   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:49.768646   96656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:24:50.084877   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:24:50.084908   96656 main.go:141] libmachine: Checking connection to Docker...
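The literal %!s(MISSING) in the command logged just above (and %!N(MISSING) in the later date command, and %!p(MISSING) in the later find command) is Go's fmt marker for a format verb with no matching argument, which suggests these lines show the command template before its arguments are substituted; the echoed CRIO_MINIKUBE_OPTIONS output indicates the command that actually ran was filled in correctly. A two-line demonstration of the marker:

    package main

    import "fmt"

    func main() {
        // A %s verb with no matching argument renders as "%!s(MISSING)",
        // the same token that appears in the logged command templates.
        fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
    }
    // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)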
	I1212 22:24:50.084919   96656 main.go:141] libmachine: (multinode-054207) Calling .GetURL
	I1212 22:24:50.086230   96656 main.go:141] libmachine: (multinode-054207) DBG | Using libvirt version 6000000
	I1212 22:24:50.088537   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.089000   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.089044   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.089221   96656 main.go:141] libmachine: Docker is up and running!
	I1212 22:24:50.089237   96656 main.go:141] libmachine: Reticulating splines...
	I1212 22:24:50.089249   96656 client.go:171] LocalClient.Create took 24.803835521s
	I1212 22:24:50.089273   96656 start.go:167] duration metric: libmachine.API.Create for "multinode-054207" took 24.803927853s
	I1212 22:24:50.089283   96656 start.go:300] post-start starting for "multinode-054207" (driver="kvm2")
	I1212 22:24:50.089294   96656 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:24:50.089316   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:50.089565   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:24:50.089591   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:50.091917   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.092209   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.092243   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.092445   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:50.092618   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:50.092770   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:50.092908   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:24:50.181165   96656 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:24:50.184899   96656 command_runner.go:130] > NAME=Buildroot
	I1212 22:24:50.184921   96656 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 22:24:50.184926   96656 command_runner.go:130] > ID=buildroot
	I1212 22:24:50.184935   96656 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 22:24:50.184943   96656 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 22:24:50.185020   96656 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:24:50.185041   96656 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:24:50.185112   96656 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:24:50.185224   96656 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:24:50.185243   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:24:50.185374   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:24:50.194260   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:24:50.216238   96656 start.go:303] post-start completed in 126.938867ms
	I1212 22:24:50.216294   96656 main.go:141] libmachine: (multinode-054207) Calling .GetConfigRaw
	I1212 22:24:50.216901   96656 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:24:50.219481   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.219822   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.219851   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.220073   96656 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:24:50.220292   96656 start.go:128] duration metric: createHost completed in 24.954051977s
	I1212 22:24:50.220319   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:50.222754   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.223085   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.223113   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.223292   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:50.223475   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:50.223612   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:50.223736   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:50.223937   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:24:50.224255   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:24:50.224267   96656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:24:50.343776   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702419890.313614005
	
	I1212 22:24:50.343799   96656 fix.go:206] guest clock: 1702419890.313614005
	I1212 22:24:50.343807   96656 fix.go:219] Guest: 2023-12-12 22:24:50.313614005 +0000 UTC Remote: 2023-12-12 22:24:50.220307024 +0000 UTC m=+25.090033816 (delta=93.306981ms)
	I1212 22:24:50.343826   96656 fix.go:190] guest clock delta is within tolerance: 93.306981ms
	I1212 22:24:50.343838   96656 start.go:83] releasing machines lock for "multinode-054207", held for 25.077667139s
	I1212 22:24:50.343874   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:50.344152   96656 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:24:50.346718   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.347056   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.347102   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.347221   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:50.347753   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:50.347934   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:24:50.348027   96656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:24:50.348075   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:50.348134   96656 ssh_runner.go:195] Run: cat /version.json
	I1212 22:24:50.348232   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:24:50.350714   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.350869   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.351103   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.351129   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.351157   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:50.351204   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:50.351261   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:50.351441   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:50.351453   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:24:50.351599   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:24:50.351616   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:50.351772   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:24:50.351840   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:24:50.351961   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:24:50.436279   96656 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 22:24:50.436775   96656 ssh_runner.go:195] Run: systemctl --version
	I1212 22:24:50.460941   96656 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:24:50.460992   96656 command_runner.go:130] > systemd 247 (247)
	I1212 22:24:50.461005   96656 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 22:24:50.461056   96656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:24:50.627278   96656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:24:50.633255   96656 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 22:24:50.633330   96656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:24:50.633391   96656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:24:50.648227   96656 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 22:24:50.648331   96656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:24:50.648351   96656 start.go:475] detecting cgroup driver to use...
	I1212 22:24:50.648422   96656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:24:50.665182   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:24:50.677679   96656 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:24:50.677780   96656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:24:50.690700   96656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:24:50.703852   96656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:24:50.803747   96656 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 22:24:50.803850   96656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:24:50.818249   96656 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 22:24:50.917806   96656 docker.go:219] disabling docker service ...
	I1212 22:24:50.917871   96656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:24:50.932370   96656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:24:50.944415   96656 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 22:24:50.944584   96656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:24:50.958856   96656 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 22:24:51.054618   96656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:24:51.068904   96656 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 22:24:51.069307   96656 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 22:24:51.173445   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:24:51.187103   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:24:51.204963   96656 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:24:51.205021   96656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:24:51.205085   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:24:51.214858   96656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:24:51.214942   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:24:51.224715   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:24:51.234491   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
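The three sed edits above adjust the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is set to cgroupfs, and conmon is placed in the pod cgroup. After the later "systemctl restart crio", the drop-in should therefore contain lines equivalent to the following (positions within the file depend on its original contents):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"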
	I1212 22:24:51.244042   96656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:24:51.253906   96656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:24:51.262084   96656 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:24:51.262143   96656 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:24:51.262191   96656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:24:51.275437   96656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
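The netfilter steps above are transient (a modprobe plus a one-off echo into /proc). A sketch of the same prerequisites made persistent across reboots; the modules-load.d and sysctl.d paths are standard conventions assumed here, not taken from this log:

    # Sketch: persist the bridge-netfilter and ip_forward prerequisites
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system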
	I1212 22:24:51.284367   96656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:24:51.397827   96656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:24:51.570752   96656 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:24:51.570822   96656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:24:51.575381   96656 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:24:51.575412   96656 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:24:51.575447   96656 command_runner.go:130] > Device: 16h/22d	Inode: 724         Links: 1
	I1212 22:24:51.575455   96656 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:24:51.575463   96656 command_runner.go:130] > Access: 2023-12-12 22:24:51.528578701 +0000
	I1212 22:24:51.575470   96656 command_runner.go:130] > Modify: 2023-12-12 22:24:51.528578701 +0000
	I1212 22:24:51.575476   96656 command_runner.go:130] > Change: 2023-12-12 22:24:51.528578701 +0000
	I1212 22:24:51.575480   96656 command_runner.go:130] >  Birth: -
	I1212 22:24:51.575876   96656 start.go:543] Will wait 60s for crictl version
	I1212 22:24:51.575920   96656 ssh_runner.go:195] Run: which crictl
	I1212 22:24:51.579513   96656 command_runner.go:130] > /usr/bin/crictl
	I1212 22:24:51.579817   96656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:24:51.621047   96656 command_runner.go:130] > Version:  0.1.0
	I1212 22:24:51.621072   96656 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:24:51.621077   96656 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 22:24:51.621082   96656 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:24:51.622504   96656 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
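With /etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock (written earlier in this log), the runtime can be queried directly. A short sketch of typical crictl checks, illustrative only:

    # Sketch: inspect the CRI-O runtime over the configured CRI socket
    sudo crictl version     # RuntimeName/RuntimeVersion, as shown above
    sudo crictl info        # runtime status and CNI readiness conditions
    sudo crictl ps -a       # containers currently known to CRI-O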
	I1212 22:24:51.622631   96656 ssh_runner.go:195] Run: crio --version
	I1212 22:24:51.674152   96656 command_runner.go:130] > crio version 1.24.1
	I1212 22:24:51.674178   96656 command_runner.go:130] > Version:          1.24.1
	I1212 22:24:51.674185   96656 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:24:51.674189   96656 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:24:51.674195   96656 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:24:51.674200   96656 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:24:51.674204   96656 command_runner.go:130] > Compiler:         gc
	I1212 22:24:51.674209   96656 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:24:51.674222   96656 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:24:51.674229   96656 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:24:51.674233   96656 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:24:51.674237   96656 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:24:51.675331   96656 ssh_runner.go:195] Run: crio --version
	I1212 22:24:51.720588   96656 command_runner.go:130] > crio version 1.24.1
	I1212 22:24:51.720615   96656 command_runner.go:130] > Version:          1.24.1
	I1212 22:24:51.720621   96656 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:24:51.720626   96656 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:24:51.720632   96656 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:24:51.720636   96656 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:24:51.720640   96656 command_runner.go:130] > Compiler:         gc
	I1212 22:24:51.720644   96656 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:24:51.720653   96656 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:24:51.720660   96656 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:24:51.720667   96656 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:24:51.720671   96656 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:24:51.725102   96656 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:24:51.726515   96656 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:24:51.729126   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:51.729475   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:24:51.729521   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:24:51.729703   96656 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:24:51.733871   96656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:24:51.746699   96656 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:24:51.746753   96656 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:24:51.782380   96656 command_runner.go:130] > {
	I1212 22:24:51.782403   96656 command_runner.go:130] >   "images": [
	I1212 22:24:51.782409   96656 command_runner.go:130] >   ]
	I1212 22:24:51.782414   96656 command_runner.go:130] > }
	I1212 22:24:51.784013   96656 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 22:24:51.784087   96656 ssh_runner.go:195] Run: which lz4
	I1212 22:24:51.788129   96656 command_runner.go:130] > /usr/bin/lz4
	I1212 22:24:51.788159   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 22:24:51.788253   96656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 22:24:51.792122   96656 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:24:51.792322   96656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:24:51.792356   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 22:24:53.661499   96656 crio.go:444] Took 1.873269 seconds to copy over tarball
	I1212 22:24:53.661567   96656 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:24:56.431534   96656 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.769941888s)
	I1212 22:24:56.431562   96656 crio.go:451] Took 2.770036 seconds to extract the tarball
	I1212 22:24:56.431571   96656 ssh_runner.go:146] rm: /preloaded.tar.lz4
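Condensed, the preload step above is a copy of the cached tarball followed by an lz4 extraction into /var. A sketch assuming the same cache path on the build host, lz4 present in the guest, and a placeholder user@node target:

    # Sketch: copy and extract a preloaded image tarball by hand
    scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 \
        user@node:/tmp/preloaded.tar.lz4
    ssh user@node 'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'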
	I1212 22:24:56.471466   96656 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:24:56.538858   96656 command_runner.go:130] > {
	I1212 22:24:56.538884   96656 command_runner.go:130] >   "images": [
	I1212 22:24:56.538891   96656 command_runner.go:130] >     {
	I1212 22:24:56.538902   96656 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 22:24:56.538909   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.538919   96656 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 22:24:56.538925   96656 command_runner.go:130] >       ],
	I1212 22:24:56.538933   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.538960   96656 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 22:24:56.538976   96656 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 22:24:56.538987   96656 command_runner.go:130] >       ],
	I1212 22:24:56.538997   96656 command_runner.go:130] >       "size": "65258016",
	I1212 22:24:56.539008   96656 command_runner.go:130] >       "uid": null,
	I1212 22:24:56.539022   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539033   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539042   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539048   96656 command_runner.go:130] >     },
	I1212 22:24:56.539055   96656 command_runner.go:130] >     {
	I1212 22:24:56.539068   96656 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 22:24:56.539077   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539088   96656 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 22:24:56.539096   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539105   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539121   96656 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 22:24:56.539143   96656 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 22:24:56.539154   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539169   96656 command_runner.go:130] >       "size": "31470524",
	I1212 22:24:56.539179   96656 command_runner.go:130] >       "uid": null,
	I1212 22:24:56.539188   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539197   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539206   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539217   96656 command_runner.go:130] >     },
	I1212 22:24:56.539225   96656 command_runner.go:130] >     {
	I1212 22:24:56.539248   96656 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 22:24:56.539256   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539265   96656 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 22:24:56.539274   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539280   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539294   96656 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 22:24:56.539307   96656 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 22:24:56.539317   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539326   96656 command_runner.go:130] >       "size": "53621675",
	I1212 22:24:56.539335   96656 command_runner.go:130] >       "uid": null,
	I1212 22:24:56.539344   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539353   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539360   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539368   96656 command_runner.go:130] >     },
	I1212 22:24:56.539377   96656 command_runner.go:130] >     {
	I1212 22:24:56.539390   96656 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 22:24:56.539404   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539414   96656 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 22:24:56.539423   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539432   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539447   96656 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 22:24:56.539461   96656 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 22:24:56.539484   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539495   96656 command_runner.go:130] >       "size": "295456551",
	I1212 22:24:56.539504   96656 command_runner.go:130] >       "uid": {
	I1212 22:24:56.539516   96656 command_runner.go:130] >         "value": "0"
	I1212 22:24:56.539525   96656 command_runner.go:130] >       },
	I1212 22:24:56.539533   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539543   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539553   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539561   96656 command_runner.go:130] >     },
	I1212 22:24:56.539567   96656 command_runner.go:130] >     {
	I1212 22:24:56.539578   96656 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 22:24:56.539591   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539605   96656 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 22:24:56.539614   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539622   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539644   96656 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 22:24:56.539659   96656 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 22:24:56.539668   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539678   96656 command_runner.go:130] >       "size": "127226832",
	I1212 22:24:56.539688   96656 command_runner.go:130] >       "uid": {
	I1212 22:24:56.539698   96656 command_runner.go:130] >         "value": "0"
	I1212 22:24:56.539708   96656 command_runner.go:130] >       },
	I1212 22:24:56.539715   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539724   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539734   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539742   96656 command_runner.go:130] >     },
	I1212 22:24:56.539751   96656 command_runner.go:130] >     {
	I1212 22:24:56.539765   96656 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 22:24:56.539775   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539785   96656 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 22:24:56.539797   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539806   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539820   96656 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 22:24:56.539836   96656 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 22:24:56.539844   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539853   96656 command_runner.go:130] >       "size": "123261750",
	I1212 22:24:56.539862   96656 command_runner.go:130] >       "uid": {
	I1212 22:24:56.539868   96656 command_runner.go:130] >         "value": "0"
	I1212 22:24:56.539877   96656 command_runner.go:130] >       },
	I1212 22:24:56.539883   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.539893   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.539902   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.539911   96656 command_runner.go:130] >     },
	I1212 22:24:56.539921   96656 command_runner.go:130] >     {
	I1212 22:24:56.539933   96656 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 22:24:56.539942   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.539953   96656 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 22:24:56.539962   96656 command_runner.go:130] >       ],
	I1212 22:24:56.539973   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.539988   96656 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 22:24:56.540002   96656 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 22:24:56.540011   96656 command_runner.go:130] >       ],
	I1212 22:24:56.540018   96656 command_runner.go:130] >       "size": "74749335",
	I1212 22:24:56.540027   96656 command_runner.go:130] >       "uid": null,
	I1212 22:24:56.540037   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.540047   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.540055   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.540064   96656 command_runner.go:130] >     },
	I1212 22:24:56.540072   96656 command_runner.go:130] >     {
	I1212 22:24:56.540083   96656 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 22:24:56.540093   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.540102   96656 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 22:24:56.540111   96656 command_runner.go:130] >       ],
	I1212 22:24:56.540118   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.540203   96656 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 22:24:56.540220   96656 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 22:24:56.540231   96656 command_runner.go:130] >       ],
	I1212 22:24:56.540241   96656 command_runner.go:130] >       "size": "61551410",
	I1212 22:24:56.540247   96656 command_runner.go:130] >       "uid": {
	I1212 22:24:56.540254   96656 command_runner.go:130] >         "value": "0"
	I1212 22:24:56.540262   96656 command_runner.go:130] >       },
	I1212 22:24:56.540269   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.540279   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.540285   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.540294   96656 command_runner.go:130] >     },
	I1212 22:24:56.540300   96656 command_runner.go:130] >     {
	I1212 22:24:56.540313   96656 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 22:24:56.540322   96656 command_runner.go:130] >       "repoTags": [
	I1212 22:24:56.540330   96656 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 22:24:56.540338   96656 command_runner.go:130] >       ],
	I1212 22:24:56.540349   96656 command_runner.go:130] >       "repoDigests": [
	I1212 22:24:56.540363   96656 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 22:24:56.540377   96656 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 22:24:56.540389   96656 command_runner.go:130] >       ],
	I1212 22:24:56.540399   96656 command_runner.go:130] >       "size": "750414",
	I1212 22:24:56.540405   96656 command_runner.go:130] >       "uid": {
	I1212 22:24:56.540412   96656 command_runner.go:130] >         "value": "65535"
	I1212 22:24:56.540418   96656 command_runner.go:130] >       },
	I1212 22:24:56.540424   96656 command_runner.go:130] >       "username": "",
	I1212 22:24:56.540431   96656 command_runner.go:130] >       "spec": null,
	I1212 22:24:56.540438   96656 command_runner.go:130] >       "pinned": false
	I1212 22:24:56.540443   96656 command_runner.go:130] >     }
	I1212 22:24:56.540450   96656 command_runner.go:130] >   ]
	I1212 22:24:56.540455   96656 command_runner.go:130] > }
	I1212 22:24:56.540615   96656 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:24:56.540633   96656 cache_images.go:84] Images are preloaded, skipping loading
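If only the tags from the JSON image dump above are of interest, they can be reduced with jq; jq availability on the node is an assumption, not something this log shows:

    # Sketch: list just the repo tags from the CRI image inventory
    sudo crictl images --output json | jq -r '.images[].repoTags[]'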
	I1212 22:24:56.540729   96656 ssh_runner.go:195] Run: crio config
	I1212 22:24:56.589218   96656 command_runner.go:130] ! time="2023-12-12 22:24:56.568124111Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 22:24:56.589327   96656 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:24:56.600734   96656 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:24:56.600763   96656 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:24:56.600769   96656 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:24:56.600773   96656 command_runner.go:130] > #
	I1212 22:24:56.600779   96656 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:24:56.600785   96656 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:24:56.600792   96656 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:24:56.600798   96656 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:24:56.600802   96656 command_runner.go:130] > # reload'.
	I1212 22:24:56.600809   96656 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:24:56.600815   96656 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:24:56.600821   96656 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:24:56.600826   96656 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:24:56.600830   96656 command_runner.go:130] > [crio]
	I1212 22:24:56.600844   96656 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:24:56.600851   96656 command_runner.go:130] > # containers images, in this directory.
	I1212 22:24:56.600862   96656 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 22:24:56.600877   96656 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:24:56.600889   96656 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 22:24:56.600899   96656 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:24:56.600906   96656 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:24:56.600911   96656 command_runner.go:130] > storage_driver = "overlay"
	I1212 22:24:56.600919   96656 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:24:56.600925   96656 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:24:56.600929   96656 command_runner.go:130] > storage_option = [
	I1212 22:24:56.600934   96656 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 22:24:56.600941   96656 command_runner.go:130] > ]
	I1212 22:24:56.600946   96656 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:24:56.600959   96656 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:24:56.600979   96656 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:24:56.600989   96656 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:24:56.600995   96656 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:24:56.601004   96656 command_runner.go:130] > # always happen on a node reboot
	I1212 22:24:56.601009   96656 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:24:56.601018   96656 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:24:56.601023   96656 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:24:56.601036   96656 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:24:56.601048   96656 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:24:56.601061   96656 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:24:56.601077   96656 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:24:56.601085   96656 command_runner.go:130] > # internal_wipe = true
	I1212 22:24:56.601096   96656 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:24:56.601102   96656 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:24:56.601110   96656 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:24:56.601115   96656 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:24:56.601121   96656 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:24:56.601128   96656 command_runner.go:130] > [crio.api]
	I1212 22:24:56.601137   96656 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:24:56.601148   96656 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:24:56.601158   96656 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:24:56.601171   96656 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:24:56.601182   96656 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:24:56.601193   96656 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:24:56.601197   96656 command_runner.go:130] > # stream_port = "0"
	I1212 22:24:56.601203   96656 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:24:56.601210   96656 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:24:56.601220   96656 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:24:56.601230   96656 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:24:56.601242   96656 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:24:56.601255   96656 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:24:56.601262   96656 command_runner.go:130] > # minutes.
	I1212 22:24:56.601272   96656 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:24:56.601291   96656 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:24:56.601313   96656 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:24:56.601321   96656 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:24:56.601334   96656 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:24:56.601347   96656 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:24:56.601359   96656 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:24:56.601373   96656 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:24:56.601385   96656 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:24:56.601395   96656 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 22:24:56.601410   96656 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:24:56.601421   96656 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 22:24:56.601455   96656 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:24:56.601469   96656 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:24:56.601473   96656 command_runner.go:130] > [crio.runtime]
	I1212 22:24:56.601482   96656 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:24:56.601495   96656 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:24:56.601505   96656 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:24:56.601518   96656 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:24:56.601525   96656 command_runner.go:130] > # default_ulimits = [
	I1212 22:24:56.601531   96656 command_runner.go:130] > # ]
	I1212 22:24:56.601541   96656 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:24:56.601550   96656 command_runner.go:130] > # no_pivot = false
	I1212 22:24:56.601556   96656 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:24:56.601569   96656 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:24:56.601587   96656 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:24:56.601600   96656 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:24:56.601611   96656 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:24:56.601622   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:24:56.601633   96656 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 22:24:56.601641   96656 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:24:56.601650   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:24:56.601660   96656 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:24:56.601674   96656 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:24:56.601686   96656 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:24:56.601701   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:24:56.601710   96656 command_runner.go:130] > conmon_env = [
	I1212 22:24:56.601722   96656 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 22:24:56.601728   96656 command_runner.go:130] > ]
	I1212 22:24:56.601735   96656 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:24:56.601747   96656 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:24:56.601760   96656 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:24:56.601770   96656 command_runner.go:130] > # default_env = [
	I1212 22:24:56.601780   96656 command_runner.go:130] > # ]
	I1212 22:24:56.601793   96656 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:24:56.601802   96656 command_runner.go:130] > # selinux = false
	I1212 22:24:56.601813   96656 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:24:56.601825   96656 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:24:56.601838   96656 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:24:56.601849   96656 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:24:56.601861   96656 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:24:56.601873   96656 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:24:56.601886   96656 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:24:56.601895   96656 command_runner.go:130] > # which might increase security.
	I1212 22:24:56.601901   96656 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 22:24:56.601914   96656 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:24:56.601928   96656 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:24:56.601942   96656 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:24:56.601955   96656 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:24:56.601964   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:24:56.601974   96656 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:24:56.601987   96656 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:24:56.601998   96656 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:24:56.602009   96656 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:24:56.602023   96656 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:24:56.602033   96656 command_runner.go:130] > # irqbalance daemon.
	I1212 22:24:56.602042   96656 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:24:56.602055   96656 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:24:56.602064   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:24:56.602071   96656 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:24:56.602077   96656 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:24:56.602085   96656 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:24:56.602096   96656 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:24:56.602106   96656 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:24:56.602117   96656 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:24:56.602130   96656 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:24:56.602140   96656 command_runner.go:130] > # will be added.
	I1212 22:24:56.602147   96656 command_runner.go:130] > # default_capabilities = [
	I1212 22:24:56.602152   96656 command_runner.go:130] > # 	"CHOWN",
	I1212 22:24:56.602163   96656 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:24:56.602169   96656 command_runner.go:130] > # 	"FSETID",
	I1212 22:24:56.602179   96656 command_runner.go:130] > # 	"FOWNER",
	I1212 22:24:56.602186   96656 command_runner.go:130] > # 	"SETGID",
	I1212 22:24:56.602195   96656 command_runner.go:130] > # 	"SETUID",
	I1212 22:24:56.602203   96656 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:24:56.602212   96656 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:24:56.602222   96656 command_runner.go:130] > # 	"KILL",
	I1212 22:24:56.602236   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602244   96656 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:24:56.602255   96656 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:24:56.602266   96656 command_runner.go:130] > # default_sysctls = [
	I1212 22:24:56.602276   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602287   96656 command_runner.go:130] > # List of devices on the host that a
	I1212 22:24:56.602300   96656 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:24:56.602315   96656 command_runner.go:130] > # allowed_devices = [
	I1212 22:24:56.602324   96656 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:24:56.602328   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602340   96656 command_runner.go:130] > # List of additional devices. specified as
	I1212 22:24:56.602356   96656 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:24:56.602368   96656 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:24:56.602410   96656 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:24:56.602418   96656 command_runner.go:130] > # additional_devices = [
	I1212 22:24:56.602423   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602431   96656 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:24:56.602442   96656 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:24:56.602452   96656 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:24:56.602462   96656 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:24:56.602474   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602488   96656 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:24:56.602500   96656 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:24:56.602507   96656 command_runner.go:130] > # Defaults to false.
	I1212 22:24:56.602515   96656 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:24:56.602530   96656 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:24:56.602543   96656 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:24:56.602552   96656 command_runner.go:130] > # hooks_dir = [
	I1212 22:24:56.602566   96656 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:24:56.602575   96656 command_runner.go:130] > # ]
	I1212 22:24:56.602587   96656 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:24:56.602598   96656 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:24:56.602607   96656 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:24:56.602616   96656 command_runner.go:130] > #
	I1212 22:24:56.602626   96656 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:24:56.602639   96656 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:24:56.602656   96656 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:24:56.602664   96656 command_runner.go:130] > #
	I1212 22:24:56.602674   96656 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:24:56.602684   96656 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:24:56.602698   96656 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:24:56.602710   96656 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:24:56.602718   96656 command_runner.go:130] > #
	I1212 22:24:56.602726   96656 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:24:56.602738   96656 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:24:56.602756   96656 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:24:56.602765   96656 command_runner.go:130] > pids_limit = 1024
	I1212 22:24:56.602775   96656 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 22:24:56.602789   96656 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:24:56.602802   96656 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:24:56.602817   96656 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:24:56.602828   96656 command_runner.go:130] > # log_size_max = -1
	I1212 22:24:56.602842   96656 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 22:24:56.602848   96656 command_runner.go:130] > # log_to_journald = false
	I1212 22:24:56.602857   96656 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:24:56.602869   96656 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:24:56.602881   96656 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:24:56.602898   96656 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:24:56.602910   96656 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:24:56.602920   96656 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:24:56.602931   96656 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:24:56.602939   96656 command_runner.go:130] > # read_only = false
	I1212 22:24:56.602949   96656 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:24:56.602963   96656 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:24:56.602977   96656 command_runner.go:130] > # live configuration reload.
	I1212 22:24:56.602987   96656 command_runner.go:130] > # log_level = "info"
	I1212 22:24:56.603000   96656 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:24:56.603011   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:24:56.603019   96656 command_runner.go:130] > # log_filter = ""
	I1212 22:24:56.603027   96656 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:24:56.603037   96656 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:24:56.603048   96656 command_runner.go:130] > # separated by comma.
	I1212 22:24:56.603055   96656 command_runner.go:130] > # uid_mappings = ""
	I1212 22:24:56.603068   96656 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:24:56.603081   96656 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:24:56.603091   96656 command_runner.go:130] > # separated by comma.
	I1212 22:24:56.603097   96656 command_runner.go:130] > # gid_mappings = ""
	I1212 22:24:56.603107   96656 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:24:56.603115   96656 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:24:56.603134   96656 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:24:56.603150   96656 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:24:56.603164   96656 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:24:56.603180   96656 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:24:56.603191   96656 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:24:56.603199   96656 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:24:56.603209   96656 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:24:56.603222   96656 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:24:56.603236   96656 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:24:56.603255   96656 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:24:56.603265   96656 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:24:56.603276   96656 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:24:56.603287   96656 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:24:56.603298   96656 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:24:56.603312   96656 command_runner.go:130] > drop_infra_ctr = false
	I1212 22:24:56.603325   96656 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:24:56.603338   96656 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:24:56.603353   96656 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:24:56.603363   96656 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:24:56.603376   96656 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:24:56.603388   96656 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:24:56.603401   96656 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:24:56.603416   96656 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:24:56.603427   96656 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 22:24:56.603440   96656 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:24:56.603454   96656 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:24:56.603467   96656 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:24:56.603477   96656 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:24:56.603485   96656 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:24:56.603497   96656 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 22:24:56.603515   96656 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 22:24:56.603526   96656 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:24:56.603539   96656 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:24:56.603551   96656 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:24:56.603561   96656 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:24:56.603568   96656 command_runner.go:130] > # ]
	I1212 22:24:56.603576   96656 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:24:56.603590   96656 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:24:56.603604   96656 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:24:56.603621   96656 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:24:56.603629   96656 command_runner.go:130] > #
	I1212 22:24:56.603637   96656 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:24:56.603648   96656 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:24:56.603655   96656 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:24:56.603661   96656 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:24:56.603673   96656 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:24:56.603684   96656 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:24:56.603693   96656 command_runner.go:130] > # Where:
	I1212 22:24:56.603705   96656 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:24:56.603718   96656 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:24:56.603731   96656 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:24:56.603741   96656 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:24:56.603749   96656 command_runner.go:130] > #   in $PATH.
	I1212 22:24:56.603764   96656 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:24:56.603775   96656 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:24:56.603786   96656 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:24:56.603795   96656 command_runner.go:130] > #   state.
	I1212 22:24:56.603810   96656 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:24:56.603821   96656 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 22:24:56.603831   96656 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:24:56.603843   96656 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:24:56.603868   96656 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:24:56.603882   96656 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:24:56.603893   96656 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:24:56.603906   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:24:56.603916   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:24:56.603929   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:24:56.603943   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:24:56.603962   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:24:56.603975   96656 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:24:56.603988   96656 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:24:56.603999   96656 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:24:56.604008   96656 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:24:56.604019   96656 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:24:56.604028   96656 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 22:24:56.604042   96656 command_runner.go:130] > runtime_type = "oci"
	I1212 22:24:56.604053   96656 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:24:56.604060   96656 command_runner.go:130] > runtime_config_path = ""
	I1212 22:24:56.604070   96656 command_runner.go:130] > monitor_path = ""
	I1212 22:24:56.604077   96656 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:24:56.604085   96656 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:24:56.604092   96656 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:24:56.604102   96656 command_runner.go:130] > # running containers
	I1212 22:24:56.604110   96656 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:24:56.604124   96656 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:24:56.604219   96656 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:24:56.604239   96656 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:24:56.604248   96656 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:24:56.604256   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:24:56.604264   96656 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:24:56.604271   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:24:56.604283   96656 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:24:56.604294   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:24:56.604317   96656 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:24:56.604329   96656 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:24:56.604342   96656 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:24:56.604352   96656 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 22:24:56.604368   96656 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:24:56.604382   96656 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:24:56.604400   96656 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:24:56.604416   96656 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:24:56.604428   96656 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:24:56.604438   96656 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:24:56.604447   96656 command_runner.go:130] > # Example:
	I1212 22:24:56.604456   96656 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:24:56.604468   96656 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:24:56.604479   96656 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:24:56.604491   96656 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:24:56.604500   96656 command_runner.go:130] > # cpuset = 0
	I1212 22:24:56.604507   96656 command_runner.go:130] > # cpushares = "0-1"
	I1212 22:24:56.604515   96656 command_runner.go:130] > # Where:
	I1212 22:24:56.604524   96656 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:24:56.604538   96656 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:24:56.604552   96656 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:24:56.604565   96656 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:24:56.604580   96656 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:24:56.604593   96656 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:24:56.604602   96656 command_runner.go:130] > # 
	I1212 22:24:56.604612   96656 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:24:56.604618   96656 command_runner.go:130] > #
	I1212 22:24:56.604631   96656 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:24:56.604644   96656 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:24:56.604658   96656 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:24:56.604671   96656 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:24:56.604684   96656 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:24:56.604692   96656 command_runner.go:130] > [crio.image]
	I1212 22:24:56.604698   96656 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:24:56.604708   96656 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:24:56.604722   96656 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:24:56.604740   96656 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:24:56.604750   96656 command_runner.go:130] > # global_auth_file = ""
	I1212 22:24:56.604762   96656 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:24:56.604774   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:24:56.604779   96656 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:24:56.604786   96656 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:24:56.604796   96656 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:24:56.604804   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:24:56.604812   96656 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:24:56.604822   96656 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:24:56.604831   96656 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 22:24:56.604841   96656 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 22:24:56.604851   96656 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:24:56.604857   96656 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:24:56.604866   96656 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:24:56.604874   96656 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:24:56.604884   96656 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:24:56.604894   96656 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:24:56.604907   96656 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:24:56.604914   96656 command_runner.go:130] > # signature_policy = ""
	I1212 22:24:56.604926   96656 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:24:56.604936   96656 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:24:56.604943   96656 command_runner.go:130] > # changing them here.
	I1212 22:24:56.604949   96656 command_runner.go:130] > # insecure_registries = [
	I1212 22:24:56.604953   96656 command_runner.go:130] > # ]
	I1212 22:24:56.604965   96656 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:24:56.604976   96656 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:24:56.604988   96656 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:24:56.605000   96656 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:24:56.605011   96656 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:24:56.605024   96656 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:24:56.605031   96656 command_runner.go:130] > # CNI plugins.
	I1212 22:24:56.605039   96656 command_runner.go:130] > [crio.network]
	I1212 22:24:56.605045   96656 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:24:56.605054   96656 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 22:24:56.605061   96656 command_runner.go:130] > # cni_default_network = ""
	I1212 22:24:56.605072   96656 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:24:56.605084   96656 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:24:56.605096   96656 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:24:56.605106   96656 command_runner.go:130] > # plugin_dirs = [
	I1212 22:24:56.605113   96656 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:24:56.605119   96656 command_runner.go:130] > # ]
	I1212 22:24:56.605131   96656 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:24:56.605138   96656 command_runner.go:130] > [crio.metrics]
	I1212 22:24:56.605147   96656 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:24:56.605151   96656 command_runner.go:130] > enable_metrics = true
	I1212 22:24:56.605156   96656 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:24:56.605161   96656 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 22:24:56.605170   96656 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:24:56.605176   96656 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:24:56.605184   96656 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:24:56.605188   96656 command_runner.go:130] > # metrics_collectors = [
	I1212 22:24:56.605192   96656 command_runner.go:130] > # 	"operations",
	I1212 22:24:56.605197   96656 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:24:56.605204   96656 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:24:56.605211   96656 command_runner.go:130] > # 	"operations_errors",
	I1212 22:24:56.605218   96656 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:24:56.605228   96656 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:24:56.605240   96656 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:24:56.605251   96656 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:24:56.605261   96656 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:24:56.605271   96656 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:24:56.605286   96656 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:24:56.605292   96656 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:24:56.605300   96656 command_runner.go:130] > # 	"containers_oom",
	I1212 22:24:56.605308   96656 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:24:56.605333   96656 command_runner.go:130] > # 	"operations_total",
	I1212 22:24:56.605340   96656 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:24:56.605345   96656 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:24:56.605352   96656 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:24:56.605357   96656 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:24:56.605363   96656 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:24:56.605370   96656 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:24:56.605377   96656 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:24:56.605382   96656 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:24:56.605389   96656 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:24:56.605392   96656 command_runner.go:130] > # ]
	I1212 22:24:56.605400   96656 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:24:56.605404   96656 command_runner.go:130] > # metrics_port = 9090
	I1212 22:24:56.605411   96656 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:24:56.605415   96656 command_runner.go:130] > # metrics_socket = ""
	I1212 22:24:56.605422   96656 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:24:56.605428   96656 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:24:56.605437   96656 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:24:56.605451   96656 command_runner.go:130] > # certificate on any modification event.
	I1212 22:24:56.605461   96656 command_runner.go:130] > # metrics_cert = ""
	I1212 22:24:56.605471   96656 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:24:56.605479   96656 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:24:56.605483   96656 command_runner.go:130] > # metrics_key = ""
	I1212 22:24:56.605493   96656 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:24:56.605507   96656 command_runner.go:130] > [crio.tracing]
	I1212 22:24:56.605515   96656 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:24:56.605522   96656 command_runner.go:130] > # enable_tracing = false
	I1212 22:24:56.605527   96656 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 22:24:56.605534   96656 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:24:56.605539   96656 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:24:56.605546   96656 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:24:56.605552   96656 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:24:56.605558   96656 command_runner.go:130] > [crio.stats]
	I1212 22:24:56.605564   96656 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:24:56.605571   96656 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:24:56.605575   96656 command_runner.go:130] > # stats_collection_period = 0
	I1212 22:24:56.605654   96656 cni.go:84] Creating CNI manager for ""
	I1212 22:24:56.605665   96656 cni.go:136] 1 nodes found, recommending kindnet
	I1212 22:24:56.605684   96656 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:24:56.605729   96656 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-054207 NodeName:multinode-054207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:24:56.605890   96656 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-054207"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
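	The block above is the kubeadm configuration minikube rendered for this profile; a few lines later in the log it is copied to the node as /var/tmp/minikube/kubeadm.yaml.new and then moved into place as /var/tmp/minikube/kubeadm.yaml. A hedged way to double-check what kubeadm actually consumed is to read those files back over SSH (sketch only; the profile name is the one used in this run):

	    minikube ssh -p multinode-054207 "sudo cat /var/tmp/minikube/kubeadm.yaml"
	    # kubeadm also writes the kubelet's merged config during init:
	    minikube ssh -p multinode-054207 "sudo cat /var/lib/kubelet/config.yaml"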
	
	I1212 22:24:56.605966   96656 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:24:56.606020   96656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:24:56.615357   96656 command_runner.go:130] > kubeadm
	I1212 22:24:56.615380   96656 command_runner.go:130] > kubectl
	I1212 22:24:56.615385   96656 command_runner.go:130] > kubelet
	I1212 22:24:56.615405   96656 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:24:56.615472   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:24:56.623995   96656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1212 22:24:56.639830   96656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:24:56.657289   96656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1212 22:24:56.674816   96656 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1212 22:24:56.678857   96656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
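	The one-liner above keeps /etc/hosts pointing at the address this run assigned to the VM: it drops any stale control-plane.minikube.internal entry and appends the current mapping. Expanded into separate steps (an equivalent sketch, not the literal command minikube runs):

	    # keep every line except an old control-plane.minikube.internal mapping
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	    # append the address resolved for this profile (192.168.39.172 in this run)
	    printf '192.168.39.172\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts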
	I1212 22:24:56.692547   96656 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207 for IP: 192.168.39.172
	I1212 22:24:56.692581   96656 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:56.692730   96656 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:24:56.692769   96656 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:24:56.692810   96656 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key
	I1212 22:24:56.692825   96656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt with IP's: []
	I1212 22:24:56.879965   96656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt ...
	I1212 22:24:56.879999   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt: {Name:mk35e701957f7c2bf02da487b684e76763ff54f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:56.880184   96656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key ...
	I1212 22:24:56.880194   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key: {Name:mk9a0954a58545c8975d7d319724b6caaacc0cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:56.880271   96656 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key.ee96354a
	I1212 22:24:56.880291   96656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt.ee96354a with IP's: [192.168.39.172 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:24:57.075030   96656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt.ee96354a ...
	I1212 22:24:57.075061   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt.ee96354a: {Name:mk8c4d0fe8cb70af740c451109b569620b99e03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:57.075267   96656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key.ee96354a ...
	I1212 22:24:57.075283   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key.ee96354a: {Name:mkd896e84adf124527601d1a8e49640e324af4da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:57.075359   96656 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt.ee96354a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt
	I1212 22:24:57.075441   96656 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key.ee96354a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key
	I1212 22:24:57.075500   96656 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key
	I1212 22:24:57.075514   96656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt with IP's: []
	I1212 22:24:57.288883   96656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt ...
	I1212 22:24:57.288921   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt: {Name:mke03f7ed479ccf3224ff66db3b244a49200a0ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:57.289095   96656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key ...
	I1212 22:24:57.289109   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key: {Name:mk1d009f5714cbc90ffa6805164f41be66851167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:24:57.289182   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 22:24:57.289201   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 22:24:57.289211   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 22:24:57.289223   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 22:24:57.289232   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:24:57.289244   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:24:57.289255   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:24:57.289269   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:24:57.289315   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:24:57.289351   96656 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:24:57.289363   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:24:57.289386   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:24:57.289410   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:24:57.289433   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:24:57.289469   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:24:57.289496   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:24:57.289509   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:24:57.289521   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:24:57.290049   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:24:57.322368   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:24:57.349698   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:24:57.371941   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:24:57.395877   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:24:57.419595   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:24:57.442334   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:24:57.464818   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:24:57.487601   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:24:57.510649   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:24:57.533432   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:24:57.556558   96656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:24:57.573911   96656 ssh_runner.go:195] Run: openssl version
	I1212 22:24:57.579660   96656 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 22:24:57.579754   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:24:57.590699   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:24:57.595367   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:24:57.595531   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:24:57.595607   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:24:57.601009   96656 command_runner.go:130] > b5213941
	I1212 22:24:57.601379   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:24:57.612499   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:24:57.623548   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:24:57.628040   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:24:57.628384   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:24:57.628446   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:24:57.633627   96656 command_runner.go:130] > 51391683
	I1212 22:24:57.633975   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 22:24:57.644409   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:24:57.654895   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:24:57.659278   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:24:57.659484   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:24:57.659532   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:24:57.664657   96656 command_runner.go:130] > 3ec20f2e
	I1212 22:24:57.664860   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
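	The three certificate blocks above (minikubeCA.pem, 83825.pem, 838252.pem) all follow the same pattern: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute its OpenSSL subject hash, and create the <hash>.0 symlink that OpenSSL's trust-store lookup expects. A generic sketch of that pattern, minus the test -s/test -L guards the log shows:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")        # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"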
	I1212 22:24:57.675049   96656 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:24:57.679340   96656 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:24:57.679384   96656 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:24:57.679443   96656 kubeadm.go:404] StartCluster: {Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:24:57.679519   96656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:24:57.679599   96656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:24:57.717925   96656 cri.go:89] found id: ""
	I1212 22:24:57.718013   96656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:24:57.727504   96656 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 22:24:57.727538   96656 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 22:24:57.727549   96656 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 22:24:57.727679   96656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:24:57.736926   96656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:24:57.746917   96656 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 22:24:57.746953   96656 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 22:24:57.746965   96656 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 22:24:57.746978   96656 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:24:57.747038   96656 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:24:57.747072   96656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 22:24:57.869968   96656 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:24:57.870011   96656 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 22:24:57.870091   96656 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:24:57.870100   96656 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:24:58.127315   96656 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:24:58.127346   96656 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:24:58.127459   96656 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:24:58.127471   96656 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:24:58.127634   96656 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:24:58.127665   96656 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:24:58.364721   96656 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:24:58.364753   96656 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:24:58.455439   96656 out.go:204]   - Generating certificates and keys ...
	I1212 22:24:58.455573   96656 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 22:24:58.455587   96656 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:24:58.455660   96656 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 22:24:58.455673   96656 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:24:58.592642   96656 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:24:58.592690   96656 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:24:58.814339   96656 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:24:58.814369   96656 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:24:59.068190   96656 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:24:59.068253   96656 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 22:24:59.128103   96656 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:24:59.128153   96656 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 22:24:59.450216   96656 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:24:59.450245   96656 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 22:24:59.450361   96656 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-054207] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1212 22:24:59.450398   96656 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-054207] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1212 22:24:59.628187   96656 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:24:59.628217   96656 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 22:24:59.628679   96656 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-054207] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1212 22:24:59.628706   96656 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-054207] and IPs [192.168.39.172 127.0.0.1 ::1]
	I1212 22:25:00.065481   96656 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:25:00.065518   96656 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:25:00.198726   96656 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:25:00.198750   96656 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:25:00.311487   96656 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:25:00.311509   96656 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 22:25:00.311610   96656 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:25:00.311641   96656 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:25:00.411094   96656 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:25:00.411133   96656 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:25:00.554674   96656 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:25:00.554717   96656 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:25:00.765071   96656 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:25:00.765113   96656 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:25:00.878773   96656 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:25:00.878813   96656 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:25:00.879603   96656 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:25:00.879621   96656 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:25:00.884781   96656 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:25:00.886870   96656 out.go:204]   - Booting up control plane ...
	I1212 22:25:00.884816   96656 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:25:00.887034   96656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:25:00.887058   96656 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:25:00.887145   96656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:25:00.887164   96656 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:25:00.887232   96656 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:25:00.887253   96656 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:25:00.902533   96656 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:25:00.902553   96656 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:25:00.907077   96656 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:25:00.907092   96656 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:25:00.907352   96656 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:25:00.907364   96656 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:25:01.030472   96656 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:25:01.030507   96656 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:25:09.031052   96656 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005665 seconds
	I1212 22:25:09.031085   96656 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005665 seconds
	I1212 22:25:09.031211   96656 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:25:09.031227   96656 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:25:09.057703   96656 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:25:09.057736   96656 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:25:09.588360   96656 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:25:09.588390   96656 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:25:09.588579   96656 kubeadm.go:322] [mark-control-plane] Marking the node multinode-054207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:25:09.588589   96656 command_runner.go:130] > [mark-control-plane] Marking the node multinode-054207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:25:10.107090   96656 kubeadm.go:322] [bootstrap-token] Using token: g39ynm.55dw044e0t10yz8w
	I1212 22:25:10.108520   96656 out.go:204]   - Configuring RBAC rules ...
	I1212 22:25:10.107175   96656 command_runner.go:130] > [bootstrap-token] Using token: g39ynm.55dw044e0t10yz8w
	I1212 22:25:10.108634   96656 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:25:10.108648   96656 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:25:10.116112   96656 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:25:10.116158   96656 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:25:10.126514   96656 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:25:10.126552   96656 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:25:10.131386   96656 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:25:10.131408   96656 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:25:10.144678   96656 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:25:10.144712   96656 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:25:10.153646   96656 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:25:10.153676   96656 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:25:10.172308   96656 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:25:10.172338   96656 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:25:10.433529   96656 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:25:10.433560   96656 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 22:25:10.525517   96656 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:25:10.525545   96656 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 22:25:10.526092   96656 kubeadm.go:322] 
	I1212 22:25:10.526161   96656 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:25:10.526184   96656 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 22:25:10.526205   96656 kubeadm.go:322] 
	I1212 22:25:10.526312   96656 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:25:10.526315   96656 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 22:25:10.526335   96656 kubeadm.go:322] 
	I1212 22:25:10.526380   96656 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:25:10.526394   96656 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 22:25:10.526459   96656 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:25:10.526458   96656 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:25:10.526539   96656 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:25:10.526549   96656 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:25:10.526554   96656 kubeadm.go:322] 
	I1212 22:25:10.526627   96656 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:25:10.526641   96656 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 22:25:10.526651   96656 kubeadm.go:322] 
	I1212 22:25:10.526724   96656 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:25:10.526733   96656 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:25:10.526737   96656 kubeadm.go:322] 
	I1212 22:25:10.526774   96656 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:25:10.526781   96656 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 22:25:10.526877   96656 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:25:10.526883   96656 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:25:10.526984   96656 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:25:10.526996   96656 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:25:10.527002   96656 kubeadm.go:322] 
	I1212 22:25:10.527114   96656 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:25:10.527124   96656 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:25:10.527245   96656 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:25:10.527254   96656 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 22:25:10.527261   96656 kubeadm.go:322] 
	I1212 22:25:10.527436   96656 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g39ynm.55dw044e0t10yz8w \
	I1212 22:25:10.527456   96656 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token g39ynm.55dw044e0t10yz8w \
	I1212 22:25:10.527566   96656 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 22:25:10.527578   96656 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 22:25:10.527604   96656 kubeadm.go:322] 	--control-plane 
	I1212 22:25:10.527622   96656 command_runner.go:130] > 	--control-plane 
	I1212 22:25:10.527639   96656 kubeadm.go:322] 
	I1212 22:25:10.527726   96656 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:25:10.527734   96656 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:25:10.527737   96656 kubeadm.go:322] 
	I1212 22:25:10.527802   96656 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g39ynm.55dw044e0t10yz8w \
	I1212 22:25:10.527808   96656 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g39ynm.55dw044e0t10yz8w \
	I1212 22:25:10.527928   96656 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:25:10.527936   96656 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:25:10.529186   96656 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:25:10.529202   96656 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
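	kubeadm's closing output above includes the join command for additional nodes. In this multinode test the join is driven by minikube itself rather than typed by hand; the user-facing equivalent, and the manual form on a second machine, would look roughly like this (sketch only; minikube mints a fresh token rather than reusing the one printed above):

	    # let minikube add a worker node to this profile
	    minikube node add -p multinode-054207
	    # or, manually on another host, as root:
	    kubeadm join control-plane.minikube.internal:8443 --token g39ynm.55dw044e0t10yz8w \
	      --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42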
	I1212 22:25:10.529231   96656 cni.go:84] Creating CNI manager for ""
	I1212 22:25:10.529237   96656 cni.go:136] 1 nodes found, recommending kindnet
	I1212 22:25:10.532278   96656 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:25:10.533937   96656 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:25:10.555107   96656 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:25:10.555134   96656 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 22:25:10.555146   96656 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 22:25:10.555157   96656 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:25:10.555168   96656 command_runner.go:130] > Access: 2023-12-12 22:24:38.651278855 +0000
	I1212 22:25:10.555176   96656 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 22:25:10.555184   96656 command_runner.go:130] > Change: 2023-12-12 22:24:36.827278855 +0000
	I1212 22:25:10.555191   96656 command_runner.go:130] >  Birth: -
	I1212 22:25:10.555260   96656 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:25:10.555274   96656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:25:10.602339   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:25:11.709645   96656 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 22:25:11.709671   96656 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 22:25:11.709677   96656 command_runner.go:130] > serviceaccount/kindnet created
	I1212 22:25:11.709682   96656 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 22:25:11.709702   96656 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.107334803s)
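At this point the kindnet CNI manifest has been applied (ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet created). A hedged way to confirm the DaemonSet actually rolls out, assuming it lands in the kube-system namespace as in a default kindnet manifest:

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s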
	I1212 22:25:11.709743   96656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:25:11.709823   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:11.709864   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-054207 minikube.k8s.io/updated_at=2023_12_12T22_25_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:11.917657   96656 command_runner.go:130] > node/multinode-054207 labeled
	I1212 22:25:11.919428   96656 command_runner.go:130] > -16
	I1212 22:25:11.919455   96656 ops.go:34] apiserver oom_adj: -16
	I1212 22:25:11.919496   96656 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
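The control-plane node has now been labeled and the minikube-rbac ClusterRoleBinding created. Purely as an illustrative check (not part of the test run itself):

    kubectl get node multinode-054207 --show-labels
    kubectl get clusterrolebinding minikube-rbac -o wide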
	I1212 22:25:11.919584   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:12.007437   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:12.007544   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:12.103444   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:12.605833   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:12.698028   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:13.105640   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:13.193855   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:13.605379   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:13.698560   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:14.105962   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:14.192396   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:14.605783   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:14.695109   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:15.105634   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:15.202282   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:15.605979   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:15.700014   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:16.105614   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:16.188311   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:16.605879   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:16.719688   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:17.105200   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:17.187075   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:17.605629   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:17.711347   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:18.105942   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:18.193396   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:18.605565   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:18.710257   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:19.105926   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:19.204365   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:19.605953   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:19.701313   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:20.106037   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:20.199848   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:20.606160   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:20.705464   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:21.106198   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:21.190071   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:21.605538   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:21.706855   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:22.105515   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:22.195003   96656 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:25:22.605204   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:25:22.802857   96656 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 22:25:22.802887   96656 command_runner.go:130] > default   0         0s
	I1212 22:25:22.804502   96656 kubeadm.go:1088] duration metric: took 11.094730113s to wait for elevateKubeSystemPrivileges.
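The ~11s loop above simply retries "kubectl get sa default" until the "default" ServiceAccount exists. A minimal shell sketch of the same wait, assuming the roughly half-second poll interval visible in the timestamps:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done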
	I1212 22:25:22.804536   96656 kubeadm.go:406] StartCluster complete in 25.125101984s
	I1212 22:25:22.804555   96656 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:25:22.804630   96656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:25:22.805406   96656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:25:22.805635   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:25:22.805790   96656 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 22:25:22.805860   96656 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:25:22.805869   96656 addons.go:69] Setting storage-provisioner=true in profile "multinode-054207"
	I1212 22:25:22.805894   96656 addons.go:231] Setting addon storage-provisioner=true in "multinode-054207"
	I1212 22:25:22.805916   96656 addons.go:69] Setting default-storageclass=true in profile "multinode-054207"
	I1212 22:25:22.805944   96656 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-054207"
	I1212 22:25:22.805971   96656 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:25:22.806013   96656 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:25:22.806329   96656 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:25:22.806432   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:22.806436   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:22.806463   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:22.806468   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:22.807070   96656 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 22:25:22.807379   96656 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:25:22.807398   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:22.807411   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:22.807425   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:22.821623   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I1212 22:25:22.822132   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:22.822654   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:22.822691   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:22.823034   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:22.823266   96656 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:25:22.825372   96656 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:25:22.825616   96656 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:25:22.825912   96656 addons.go:231] Setting addon default-storageclass=true in "multinode-054207"
	I1212 22:25:22.825953   96656 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:25:22.826183   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I1212 22:25:22.826354   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:22.826403   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:22.826586   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:22.827083   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:22.827110   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:22.827476   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:22.827929   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:22.827957   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:22.842053   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I1212 22:25:22.842388   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I1212 22:25:22.842562   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:22.842816   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:22.843138   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:22.843159   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:22.843315   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:22.843337   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:22.843571   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:22.843762   96656 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:25:22.843768   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:22.844363   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:22.844405   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:22.845548   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:25:22.847972   96656 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:25:22.849763   96656 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:25:22.849791   96656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:25:22.849815   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:25:22.853308   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:25:22.853807   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:25:22.853997   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:25:22.854016   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:25:22.854232   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:25:22.854442   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:25:22.854613   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:25:22.859887   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1212 22:25:22.860354   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:22.860826   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:22.860847   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:22.861171   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:22.861402   96656 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:25:22.863132   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:25:22.863149   96656 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I1212 22:25:22.863165   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:22.863174   96656 round_trippers.go:580]     Audit-Id: ad38465f-38b6-41fa-aae7-fafe58a50fdd
	I1212 22:25:22.863188   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:22.863201   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:22.863208   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:22.863217   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:22.863225   96656 round_trippers.go:580]     Content-Length: 291
	I1212 22:25:22.863235   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:22 GMT
	I1212 22:25:22.863296   96656 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"364","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:25:22.863407   96656 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:25:22.863426   96656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:25:22.863446   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:25:22.863823   96656 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"364","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:25:22.863901   96656 round_trippers.go:463] PUT https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:25:22.863915   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:22.863926   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:22.863936   96656 round_trippers.go:473]     Content-Type: application/json
	I1212 22:25:22.863948   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:22.866308   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:25:22.866731   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:25:22.866763   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:25:22.867001   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:25:22.867196   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:25:22.867367   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:25:22.867492   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:25:22.885514   96656 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1212 22:25:22.885541   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:22.885560   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:22 GMT
	I1212 22:25:22.885566   96656 round_trippers.go:580]     Audit-Id: 3c85c582-0f19-4810-87d0-2abc09b02899
	I1212 22:25:22.885571   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:22.885576   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:22.885582   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:22.885587   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:22.885595   96656 round_trippers.go:580]     Content-Length: 291
	I1212 22:25:22.885631   96656 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"371","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:25:22.885807   96656 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:25:22.885823   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:22.885834   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:22.885843   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:22.918088   96656 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I1212 22:25:22.918120   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:22.918131   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:22.918137   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:22.918145   96656 round_trippers.go:580]     Content-Length: 291
	I1212 22:25:22.918153   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:22 GMT
	I1212 22:25:22.918161   96656 round_trippers.go:580]     Audit-Id: 58f12103-6632-4154-b94a-7322f5969c01
	I1212 22:25:22.918169   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:22.918181   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:22.933745   96656 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"371","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:25:22.933890   96656 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-054207" context rescaled to 1 replicas
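The GET/PUT pair above rescales the coredns Deployment from 2 replicas to 1 through the scale subresource. A rough kubectl equivalent of the same operation would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deployment coredns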
	I1212 22:25:22.933931   96656 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:25:22.936906   96656 out.go:177] * Verifying Kubernetes components...
	I1212 22:25:22.938363   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:25:22.989027   96656 command_runner.go:130] > apiVersion: v1
	I1212 22:25:22.989057   96656 command_runner.go:130] > data:
	I1212 22:25:22.989064   96656 command_runner.go:130] >   Corefile: |
	I1212 22:25:22.989069   96656 command_runner.go:130] >     .:53 {
	I1212 22:25:22.989081   96656 command_runner.go:130] >         errors
	I1212 22:25:22.989090   96656 command_runner.go:130] >         health {
	I1212 22:25:22.989097   96656 command_runner.go:130] >            lameduck 5s
	I1212 22:25:22.989103   96656 command_runner.go:130] >         }
	I1212 22:25:22.989109   96656 command_runner.go:130] >         ready
	I1212 22:25:22.989122   96656 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 22:25:22.989133   96656 command_runner.go:130] >            pods insecure
	I1212 22:25:22.989141   96656 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 22:25:22.989152   96656 command_runner.go:130] >            ttl 30
	I1212 22:25:22.989161   96656 command_runner.go:130] >         }
	I1212 22:25:22.989171   96656 command_runner.go:130] >         prometheus :9153
	I1212 22:25:22.989182   96656 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 22:25:22.989194   96656 command_runner.go:130] >            max_concurrent 1000
	I1212 22:25:22.989216   96656 command_runner.go:130] >         }
	I1212 22:25:22.989227   96656 command_runner.go:130] >         cache 30
	I1212 22:25:22.989233   96656 command_runner.go:130] >         loop
	I1212 22:25:22.989240   96656 command_runner.go:130] >         reload
	I1212 22:25:22.989250   96656 command_runner.go:130] >         loadbalance
	I1212 22:25:22.989260   96656 command_runner.go:130] >     }
	I1212 22:25:22.989270   96656 command_runner.go:130] > kind: ConfigMap
	I1212 22:25:22.989279   96656 command_runner.go:130] > metadata:
	I1212 22:25:22.989291   96656 command_runner.go:130] >   creationTimestamp: "2023-12-12T22:25:10Z"
	I1212 22:25:22.989301   96656 command_runner.go:130] >   name: coredns
	I1212 22:25:22.989311   96656 command_runner.go:130] >   namespace: kube-system
	I1212 22:25:22.989321   96656 command_runner.go:130] >   resourceVersion: "267"
	I1212 22:25:22.989332   96656 command_runner.go:130] >   uid: 731b2461-d0ec-4a4c-8705-affc9d0f579b
	I1212 22:25:22.990668   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:25:22.991046   96656 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:25:22.991400   96656 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:25:22.991791   96656 node_ready.go:35] waiting up to 6m0s for node "multinode-054207" to be "Ready" ...
	I1212 22:25:22.991932   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:22.991946   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:22.991958   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:22.991971   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:23.002454   96656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:25:23.041569   96656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:25:23.084745   96656 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I1212 22:25:23.084771   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:23.084783   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:23.084792   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:23.084801   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:23.084817   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:23 GMT
	I1212 22:25:23.084826   96656 round_trippers.go:580]     Audit-Id: bd2e0d2a-bbe0-4054-9a46-58a1e3eba579
	I1212 22:25:23.084835   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:23.099936   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:23.100832   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:23.100859   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:23.100872   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:23.100888   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:23.169901   96656 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I1212 22:25:23.169935   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:23.169947   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:23.169955   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:23 GMT
	I1212 22:25:23.169963   96656 round_trippers.go:580]     Audit-Id: de0b2ba8-db6a-4ca1-9cf5-c637373355b5
	I1212 22:25:23.169979   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:23.169986   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:23.169994   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:23.172765   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:23.674112   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:23.674147   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:23.674161   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:23.674171   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:23.708432   96656 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I1212 22:25:23.708463   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:23.708475   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:23.708485   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:23.708493   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:23 GMT
	I1212 22:25:23.708500   96656 round_trippers.go:580]     Audit-Id: e6fc5882-4dc1-443d-9834-00f84e935ea7
	I1212 22:25:23.708508   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:23.708517   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:23.708700   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:23.791373   96656 command_runner.go:130] > configmap/coredns replaced
	I1212 22:25:23.791426   96656 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
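The sed pipeline run at 22:25:22 inserts a hosts stanza (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward directive and a log directive ahead of errors, then replaces the ConfigMap. A hedged way to inspect the result, assuming the data key is Corefile as in the ConfigMap dump above:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'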
	I1212 22:25:23.956666   96656 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 22:25:23.965859   96656 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 22:25:23.977165   96656 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 22:25:23.991760   96656 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 22:25:23.999625   96656 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 22:25:24.009773   96656 command_runner.go:130] > pod/storage-provisioner created
	I1212 22:25:24.012476   96656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.009979626s)
	I1212 22:25:24.012518   96656 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 22:25:24.012533   96656 main.go:141] libmachine: Making call to close driver server
	I1212 22:25:24.012546   96656 main.go:141] libmachine: (multinode-054207) Calling .Close
	I1212 22:25:24.012566   96656 main.go:141] libmachine: Making call to close driver server
	I1212 22:25:24.012584   96656 main.go:141] libmachine: (multinode-054207) Calling .Close
	I1212 22:25:24.012865   96656 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:25:24.012881   96656 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:25:24.012910   96656 main.go:141] libmachine: Making call to close driver server
	I1212 22:25:24.012928   96656 main.go:141] libmachine: (multinode-054207) Calling .Close
	I1212 22:25:24.013008   96656 main.go:141] libmachine: (multinode-054207) DBG | Closing plugin on server side
	I1212 22:25:24.013014   96656 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:25:24.013079   96656 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:25:24.013097   96656 main.go:141] libmachine: Making call to close driver server
	I1212 22:25:24.013111   96656 main.go:141] libmachine: (multinode-054207) Calling .Close
	I1212 22:25:24.013154   96656 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:25:24.013174   96656 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:25:24.013402   96656 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:25:24.013423   96656 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:25:24.013511   96656 round_trippers.go:463] GET https://192.168.39.172:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 22:25:24.013516   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:24.013524   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:24.013530   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:24.025828   96656 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 22:25:24.025853   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:24.025861   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:24 GMT
	I1212 22:25:24.025867   96656 round_trippers.go:580]     Audit-Id: 77a80686-4794-46d0-8514-f7f6be36503f
	I1212 22:25:24.025872   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:24.025878   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:24.025883   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:24.025888   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:24.025894   96656 round_trippers.go:580]     Content-Length: 1273
	I1212 22:25:24.025970   96656 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"standard","uid":"d3df2743-b0c1-4219-ad9f-c1d97612463b","resourceVersion":"397","creationTimestamp":"2023-12-12T22:25:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:25:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 22:25:24.026500   96656 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d3df2743-b0c1-4219-ad9f-c1d97612463b","resourceVersion":"397","creationTimestamp":"2023-12-12T22:25:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:25:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 22:25:24.026562   96656 round_trippers.go:463] PUT https://192.168.39.172:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 22:25:24.026575   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:24.026587   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:24.026599   96656 round_trippers.go:473]     Content-Type: application/json
	I1212 22:25:24.026611   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:24.033875   96656 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 22:25:24.033894   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:24.033904   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:24.033913   96656 round_trippers.go:580]     Content-Length: 1220
	I1212 22:25:24.033921   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:24 GMT
	I1212 22:25:24.033930   96656 round_trippers.go:580]     Audit-Id: ec1fc34a-4a3d-4be7-b45b-20d58565601e
	I1212 22:25:24.033941   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:24.033946   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:24.033957   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:24.033990   96656 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d3df2743-b0c1-4219-ad9f-c1d97612463b","resourceVersion":"397","creationTimestamp":"2023-12-12T22:25:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:25:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 22:25:24.034148   96656 main.go:141] libmachine: Making call to close driver server
	I1212 22:25:24.034166   96656 main.go:141] libmachine: (multinode-054207) Calling .Close
	I1212 22:25:24.034445   96656 main.go:141] libmachine: Successfully made call to close driver server
	I1212 22:25:24.034465   96656 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 22:25:24.036121   96656 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 22:25:24.037486   96656 addons.go:502] enable addons completed in 1.231700524s: enabled=[storage-provisioner default-storageclass]
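Both addons report enabled. Illustrative verification commands, assuming the storage-provisioner pod is created in the kube-system namespace (the applied manifest itself is not shown in this log):

    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass standard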
	I1212 22:25:24.174041   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:24.174065   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:24.174073   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:24.174079   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:24.182057   96656 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 22:25:24.182088   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:24.182099   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:24.182108   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:24 GMT
	I1212 22:25:24.182117   96656 round_trippers.go:580]     Audit-Id: 4b79ef97-bf70-4935-bb6e-a46ef3a31457
	I1212 22:25:24.182125   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:24.182133   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:24.182142   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:24.182401   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:24.674129   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:24.674172   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:24.674182   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:24.674187   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:24.680476   96656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:25:24.680510   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:24.680521   96656 round_trippers.go:580]     Audit-Id: a5ad6bec-4909-4167-94ca-820bdf92fae2
	I1212 22:25:24.680529   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:24.680537   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:24.680547   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:24.680556   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:24.680568   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:24 GMT
	I1212 22:25:24.680839   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:25.174171   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:25.174194   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:25.174203   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:25.174209   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:25.180850   96656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:25:25.180881   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:25.180891   96656 round_trippers.go:580]     Audit-Id: 5711dc68-1941-4734-ab48-dfa13e108d4b
	I1212 22:25:25.180898   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:25.180906   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:25.180914   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:25.180921   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:25.180932   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:25 GMT
	I1212 22:25:25.181116   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:25.181564   96656 node_ready.go:58] node "multinode-054207" has status "Ready":"False"
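node_ready.go keeps polling this endpoint, within the 6m0s budget noted earlier, until the node reports Ready. A rough kubectl equivalent of that wait:

    kubectl wait --for=condition=Ready node/multinode-054207 --timeout=6m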
	I1212 22:25:25.673569   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:25.673598   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:25.673607   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:25.673613   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:25.676253   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:25.676280   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:25.676304   96656 round_trippers.go:580]     Audit-Id: f405b293-0560-40d7-91fc-25c286da3041
	I1212 22:25:25.676310   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:25.676316   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:25.676321   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:25.676326   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:25.676335   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:25 GMT
	I1212 22:25:25.676499   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:26.174216   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:26.174243   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:26.174253   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:26.174258   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:26.177818   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:26.177841   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:26.177848   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:26.177853   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:26 GMT
	I1212 22:25:26.177859   96656 round_trippers.go:580]     Audit-Id: 84fc9b76-633a-4fe7-bed2-c6541beca4d6
	I1212 22:25:26.177863   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:26.177881   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:26.177886   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:26.178093   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:26.673746   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:26.673775   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:26.673784   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:26.673791   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:26.676965   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:26.677000   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:26.677012   96656 round_trippers.go:580]     Audit-Id: d645c6ea-23c5-42e1-ace3-928105620c43
	I1212 22:25:26.677020   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:26.677029   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:26.677037   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:26.677046   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:26.677059   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:26 GMT
	I1212 22:25:26.677303   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:27.174057   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:27.174086   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:27.174095   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:27.174101   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:27.181049   96656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:25:27.181076   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:27.181084   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:27 GMT
	I1212 22:25:27.181090   96656 round_trippers.go:580]     Audit-Id: f0994cfd-e869-4ab3-97ee-9c01567700d5
	I1212 22:25:27.181095   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:27.181099   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:27.181104   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:27.181110   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:27.182419   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:27.182764   96656 node_ready.go:58] node "multinode-054207" has status "Ready":"False"
	I1212 22:25:27.674170   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:27.674192   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:27.674205   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:27.674211   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:27.677260   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:27.677287   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:27.677298   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:27.677306   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:27.677319   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:27.677331   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:27.677339   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:27 GMT
	I1212 22:25:27.677349   96656 round_trippers.go:580]     Audit-Id: 6cc12e0f-9d85-4fe4-9ec8-fff37304e15e
	I1212 22:25:27.677566   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:28.173735   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:28.173768   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.173779   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.173785   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.177845   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:25:28.177872   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.177901   96656 round_trippers.go:580]     Audit-Id: 023c11b4-0d92-41fc-b958-b8137c859221
	I1212 22:25:28.177910   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.177918   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.177929   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.177938   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.177948   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.178542   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"340","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1212 22:25:28.674224   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:28.674249   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.674261   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.674269   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.677258   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:28.677285   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.677293   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.677299   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.677304   96656 round_trippers.go:580]     Audit-Id: 8f08d5df-376b-4b90-9c74-3e4798da5e2e
	I1212 22:25:28.677310   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.677337   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.677349   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.677486   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:28.677829   96656 node_ready.go:49] node "multinode-054207" has status "Ready":"True"
	I1212 22:25:28.677848   96656 node_ready.go:38] duration metric: took 5.686010281s waiting for node "multinode-054207" to be "Ready" ...
	I1212 22:25:28.677858   96656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:25:28.677923   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:25:28.677933   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.677940   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.677945   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.682903   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:25:28.682927   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.682940   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.682946   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.682951   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.682956   96656 round_trippers.go:580]     Audit-Id: 273cb95f-fb43-4cc2-97f3-6bf2503b7cf0
	I1212 22:25:28.682961   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.682970   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.683851   96656 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53020 chars]
	I1212 22:25:28.687278   96656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:28.687367   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:28.687377   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.687387   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.687393   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.692397   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:25:28.692421   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.692429   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.692434   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.692439   96656 round_trippers.go:580]     Audit-Id: 26c8ca3c-cc3e-4c86-b337-0549e632f4d6
	I1212 22:25:28.692444   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.692449   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.692454   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.692576   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:28.693012   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:28.693028   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.693035   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.693040   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.699110   96656 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:25:28.699140   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.699150   96656 round_trippers.go:580]     Audit-Id: 62df61e3-ff6e-437a-a875-a2bf8efb13f1
	I1212 22:25:28.699158   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.699166   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.699174   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.699182   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.699190   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.699378   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:28.699851   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:28.699871   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.699881   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.699890   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.703049   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:28.703066   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.703073   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.703078   96656 round_trippers.go:580]     Audit-Id: b7312475-22ba-4008-becf-f0dba75a553c
	I1212 22:25:28.703084   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.703089   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.703094   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.703099   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.703430   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:28.703832   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:28.703845   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:28.703852   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:28.703870   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:28.711277   96656 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 22:25:28.711300   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:28.711310   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:28.711317   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:28 GMT
	I1212 22:25:28.711325   96656 round_trippers.go:580]     Audit-Id: 87df856a-f4dc-4340-9be0-bf907730b843
	I1212 22:25:28.711334   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:28.711343   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:28.711353   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:28.711548   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:29.212682   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:29.212710   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:29.212719   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:29.212725   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:29.215984   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:29.216007   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:29.216015   96656 round_trippers.go:580]     Audit-Id: 088c0e32-850e-4343-ae1f-649af72ee13f
	I1212 22:25:29.216020   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:29.216025   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:29.216030   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:29.216035   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:29.216040   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:29 GMT
	I1212 22:25:29.216244   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:29.216679   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:29.216691   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:29.216699   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:29.216705   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:29.221110   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:25:29.221131   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:29.221137   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:29.221143   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:29 GMT
	I1212 22:25:29.221148   96656 round_trippers.go:580]     Audit-Id: 3b0323e6-2ab0-4ede-a236-ba6a0fd2d016
	I1212 22:25:29.221153   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:29.221158   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:29.221164   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:29.221309   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:29.712063   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:29.712111   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:29.712125   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:29.712135   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:29.723507   96656 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1212 22:25:29.723536   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:29.723547   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:29.723554   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:29.723562   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:29.723568   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:29.723575   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:29 GMT
	I1212 22:25:29.723582   96656 round_trippers.go:580]     Audit-Id: 59de56c7-3439-4778-a3e8-e714c673a534
	I1212 22:25:29.723830   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:29.724384   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:29.724401   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:29.724412   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:29.724421   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:29.733976   96656 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 22:25:29.734005   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:29.734018   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:29.734026   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:29.734035   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:29.734043   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:29 GMT
	I1212 22:25:29.734050   96656 round_trippers.go:580]     Audit-Id: 38874b4d-3521-4e49-b451-be7ed0e62329
	I1212 22:25:29.734057   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:29.734277   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:30.212018   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:30.212047   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:30.212061   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:30.212070   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:30.215371   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:30.215393   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:30.215419   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:30.215428   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:30.215436   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:30 GMT
	I1212 22:25:30.215448   96656 round_trippers.go:580]     Audit-Id: 02643832-7004-4057-8f81-99e51a72587e
	I1212 22:25:30.215455   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:30.215462   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:30.215682   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:30.216254   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:30.216275   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:30.216285   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:30.216294   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:30.218725   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:30.218743   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:30.218753   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:30.218760   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:30.218769   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:30 GMT
	I1212 22:25:30.218781   96656 round_trippers.go:580]     Audit-Id: 7a2bb2b3-dd79-4410-8427-ab1c500254d8
	I1212 22:25:30.218793   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:30.218805   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:30.219005   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:30.713036   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:30.713060   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:30.713069   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:30.713081   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:30.717542   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:25:30.717568   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:30.717578   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:30 GMT
	I1212 22:25:30.717585   96656 round_trippers.go:580]     Audit-Id: 185fea92-2351-4c9b-aecb-cb4b7c43059c
	I1212 22:25:30.717593   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:30.717601   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:30.717610   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:30.717622   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:30.717790   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"427","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1212 22:25:30.718293   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:30.718311   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:30.718319   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:30.718325   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:30.721093   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:30.721117   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:30.721127   96656 round_trippers.go:580]     Audit-Id: 80540e69-eb0a-4f79-bede-a63d6d4ec135
	I1212 22:25:30.721134   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:30.721142   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:30.721149   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:30.721161   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:30.721172   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:30 GMT
	I1212 22:25:30.721368   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:30.721782   96656 pod_ready.go:102] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"False"
	I1212 22:25:31.212076   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:25:31.212109   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.212121   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.212130   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.215089   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.215114   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.215123   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.215131   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.215140   96656 round_trippers.go:580]     Audit-Id: 2b241a0c-6576-46b1-87f7-e52f3d5c6cbc
	I1212 22:25:31.215148   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.215157   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.215166   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.215389   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"445","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1212 22:25:31.215966   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.215982   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.215990   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.215997   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.219798   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:31.219822   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.219832   96656 round_trippers.go:580]     Audit-Id: 56a73c33-170e-493c-afe0-cbdc88387336
	I1212 22:25:31.219840   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.219848   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.219856   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.219864   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.219873   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.220371   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.220790   96656 pod_ready.go:92] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.220810   96656 pod_ready.go:81] duration metric: took 2.533504456s waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.220819   96656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.220879   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:25:31.220887   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.220894   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.220900   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.222793   96656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:25:31.222811   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.222820   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.222828   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.222836   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.222845   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.222858   96656 round_trippers.go:580]     Audit-Id: 58a13d77-0d1e-41c5-9fd5-f8f418bacc47
	I1212 22:25:31.222870   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.223058   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"439","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1212 22:25:31.223475   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.223491   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.223498   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.223504   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.225341   96656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:25:31.225355   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.225364   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.225371   96656 round_trippers.go:580]     Audit-Id: 549f7ad7-dc08-4e72-b7d1-0a3995eddf31
	I1212 22:25:31.225378   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.225386   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.225404   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.225414   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.225738   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.226004   96656 pod_ready.go:92] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.226020   96656 pod_ready.go:81] duration metric: took 5.195904ms waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.226032   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.226075   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:25:31.226102   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.226109   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.226115   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.228126   96656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:25:31.228143   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.228151   96656 round_trippers.go:580]     Audit-Id: 5590df3d-7177-487d-9c00-df8b629b18fa
	I1212 22:25:31.228160   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.228166   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.228174   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.228182   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.228195   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.228429   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"441","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1212 22:25:31.228802   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.228823   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.228830   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.228839   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.231226   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.231249   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.231259   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.231266   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.231275   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.231287   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.231296   96656 round_trippers.go:580]     Audit-Id: 2b41bf55-90b9-4b06-bead-fd64e86e15ae
	I1212 22:25:31.231309   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.232095   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.232369   96656 pod_ready.go:92] pod "kube-apiserver-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.232385   96656 pod_ready.go:81] duration metric: took 6.345972ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.232397   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.232446   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:25:31.232462   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.232473   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.232486   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.235004   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.235018   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.235024   96656 round_trippers.go:580]     Audit-Id: 402d00e4-23b2-4b47-99e8-a53e4844ab52
	I1212 22:25:31.235029   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.235034   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.235039   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.235044   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.235049   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.235894   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"374","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1212 22:25:31.236237   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.236248   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.236254   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.236261   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.238168   96656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:25:31.238190   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.238202   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.238210   96656 round_trippers.go:580]     Audit-Id: 42a4eb35-6842-4455-ada3-f9e4697303f5
	I1212 22:25:31.238218   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.238230   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.238242   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.238257   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.238355   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.238638   96656 pod_ready.go:92] pod "kube-controller-manager-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.238653   96656 pod_ready.go:81] duration metric: took 6.247233ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.238665   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.274982   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:25:31.275006   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.275015   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.275021   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.277710   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.277737   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.277747   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.277754   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.277764   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.277770   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.277778   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.277784   96656 round_trippers.go:580]     Audit-Id: 7ce2d040-778d-445d-b04f-493f42cb66bc
	I1212 22:25:31.278017   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"412","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:25:31.474907   96656 request.go:629] Waited for 196.358931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.474965   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.474970   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.475023   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.475036   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.478194   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:31.478220   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.478231   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.478240   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.478247   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.478256   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.478264   96656 round_trippers.go:580]     Audit-Id: 98a9765a-1dea-4198-8fee-bc606bf4cd1d
	I1212 22:25:31.478273   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.478398   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.478818   96656 pod_ready.go:92] pod "kube-proxy-rnx8m" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.478839   96656 pod_ready.go:81] duration metric: took 240.167588ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.478852   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.675337   96656 request.go:629] Waited for 196.402742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:25:31.675431   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:25:31.675443   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.675457   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.675471   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.678053   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.678079   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.678086   96656 round_trippers.go:580]     Audit-Id: e250b93c-459b-485e-a14e-cf307dad8355
	I1212 22:25:31.678092   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.678097   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.678102   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.678109   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.678115   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.678278   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"440","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1212 22:25:31.875085   96656 request.go:629] Waited for 196.401494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.875160   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:25:31.875166   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.875192   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.875199   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.877878   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:31.877905   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.877916   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.877924   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.877933   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.877942   96656 round_trippers.go:580]     Audit-Id: cd35cf66-2d36-4d8b-9f4f-96365222f67f
	I1212 22:25:31.877950   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.877959   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.878070   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:25:31.878485   96656 pod_ready.go:92] pod "kube-scheduler-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:25:31.878507   96656 pod_ready.go:81] duration metric: took 399.644866ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:25:31.878522   96656 pod_ready.go:38] duration metric: took 3.200644967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
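The pod_ready lines above poll each control-plane pod through the API server and wait for its Ready condition to become True. A minimal client-go sketch of that check, hedged: the pod and namespace names are taken from the log, the kubeconfig path is an assumption for illustration, and this is the general pattern rather than minikube's pod_ready.go itself.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is what the pod_ready.go log lines above are waiting for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed kubeconfig path; minikube writes one per profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll the etcd static pod (name taken from the log) for up to the
    	// same 6m window the test uses.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "etcd-multinode-054207", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }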
	I1212 22:25:31.878545   96656 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:25:31.878603   96656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:25:31.893940   96656 command_runner.go:130] > 1129
	I1212 22:25:31.894016   96656 api_server.go:72] duration metric: took 8.96005515s to wait for apiserver process to appear ...
	I1212 22:25:31.894030   96656 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:25:31.894047   96656 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:25:31.898946   96656 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1212 22:25:31.899053   96656 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I1212 22:25:31.899064   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:31.899072   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:31.899077   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:31.900279   96656 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:25:31.900294   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:31.900300   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:31.900307   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:31.900316   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:31.900329   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:31.900338   96656 round_trippers.go:580]     Content-Length: 264
	I1212 22:25:31.900346   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:31 GMT
	I1212 22:25:31.900352   96656 round_trippers.go:580]     Audit-Id: 6915e041-98df-45ae-a5c5-8ea3b83552e4
	I1212 22:25:31.900370   96656 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 22:25:31.900446   96656 api_server.go:141] control plane version: v1.28.4
	I1212 22:25:31.900461   96656 api_server.go:131] duration metric: took 6.426635ms to wait for apiserver health ...
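The apiserver wait above does two probes: a raw GET on /healthz (which returns the plain text "ok") and a GET on /version to read the control-plane version (v1.28.4 in this run). A short sketch of both probes with client-go, assuming a *kubernetes.Clientset built as in the earlier example:

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // checkAPIServer mirrors the two probes in the log: /healthz via the raw
    // REST client, then /version via the discovery client.
    func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
    	body, err := client.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return fmt.Errorf("healthz: %w", err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	info, err := client.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("version: %w", err)
    	}
    	fmt.Printf("control plane version: %s\n", info.GitVersion)
    	return nil
    }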
	I1212 22:25:31.900469   96656 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:25:32.074908   96656 request.go:629] Waited for 174.365207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:25:32.074999   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:25:32.075005   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:32.075013   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:32.075019   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:32.078946   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:32.078972   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:32.078979   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:32.078985   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:32 GMT
	I1212 22:25:32.078990   96656 round_trippers.go:580]     Audit-Id: d348f6ec-2d73-4b9d-a84d-f8c14b0b338f
	I1212 22:25:32.078996   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:32.079001   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:32.079006   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:32.080102   96656 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"445","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1212 22:25:32.081820   96656 system_pods.go:59] 8 kube-system pods found
	I1212 22:25:32.081851   96656 system_pods.go:61] "coredns-5dd5756b68-rj4p4" [8bd5cacb-68c8-41e5-a91e-07e6a9739897] Running
	I1212 22:25:32.081856   96656 system_pods.go:61] "etcd-multinode-054207" [2c328cec-c2e2-49d1-85af-66899f444c90] Running
	I1212 22:25:32.081860   96656 system_pods.go:61] "kindnet-nj2sh" [947b4acb-082a-436b-b68f-d253f391ee24] Running
	I1212 22:25:32.081865   96656 system_pods.go:61] "kube-apiserver-multinode-054207" [70bc63a6-e544-401c-90ae-7473ce8343da] Running
	I1212 22:25:32.081869   96656 system_pods.go:61] "kube-controller-manager-multinode-054207" [9040c58b-7f77-4355-880f-991c010720f7] Running
	I1212 22:25:32.081876   96656 system_pods.go:61] "kube-proxy-rnx8m" [e8875d71-d50e-44f1-92c1-db1858b4b3bb] Running
	I1212 22:25:32.081880   96656 system_pods.go:61] "kube-scheduler-multinode-054207" [79f6cbd9-988a-4dc2-a910-15abd7598b9c] Running
	I1212 22:25:32.081884   96656 system_pods.go:61] "storage-provisioner" [40d577b4-8d36-4f55-946d-92755b1d6998] Running
	I1212 22:25:32.081889   96656 system_pods.go:74] duration metric: took 181.415949ms to wait for pod list to return data ...
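The "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch come from client-go's local request rate limiter (roughly 5 requests/second with a small burst by default), not from server-side APF. A hedged sketch of where those limits live and how a client could raise them; the QPS/Burst values are illustrative, not minikube's settings.

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a higher client-side rate limit,
    // so bursts of GETs like the ones above are not delayed by the local throttle.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go's default is about 5 requests/second
    	cfg.Burst = 100 // default burst is 10
    	return kubernetes.NewForConfig(cfg)
    }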
	I1212 22:25:32.081903   96656 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:25:32.274327   96656 request.go:629] Waited for 192.32966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:25:32.274405   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:25:32.274410   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:32.274419   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:32.274425   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:32.278373   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:32.278404   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:32.278412   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:32.278418   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:32.278423   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:32.278428   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:32.278433   96656 round_trippers.go:580]     Content-Length: 261
	I1212 22:25:32.278440   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:32 GMT
	I1212 22:25:32.278444   96656 round_trippers.go:580]     Audit-Id: 318ee9f2-1614-4f6f-970f-055eef02b203
	I1212 22:25:32.278468   96656 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"992432e5-3d6d-43a7-bea9-b64208472919","resourceVersion":"339","creationTimestamp":"2023-12-12T22:25:22Z"}}]}
	I1212 22:25:32.278703   96656 default_sa.go:45] found service account: "default"
	I1212 22:25:32.278727   96656 default_sa.go:55] duration metric: took 196.814462ms for default service account to be created ...
	I1212 22:25:32.278744   96656 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:25:32.475233   96656 request.go:629] Waited for 196.385775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:25:32.475300   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:25:32.475305   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:32.475314   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:32.475320   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:32.479046   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:25:32.479075   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:32.479103   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:32.479113   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:32.479123   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:32.479132   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:32.479144   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:32 GMT
	I1212 22:25:32.479152   96656 round_trippers.go:580]     Audit-Id: 84b4fab6-a1aa-49e9-9dc2-329645e2097d
	I1212 22:25:32.480272   96656 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"445","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1212 22:25:32.481869   96656 system_pods.go:86] 8 kube-system pods found
	I1212 22:25:32.481892   96656 system_pods.go:89] "coredns-5dd5756b68-rj4p4" [8bd5cacb-68c8-41e5-a91e-07e6a9739897] Running
	I1212 22:25:32.481897   96656 system_pods.go:89] "etcd-multinode-054207" [2c328cec-c2e2-49d1-85af-66899f444c90] Running
	I1212 22:25:32.481901   96656 system_pods.go:89] "kindnet-nj2sh" [947b4acb-082a-436b-b68f-d253f391ee24] Running
	I1212 22:25:32.481905   96656 system_pods.go:89] "kube-apiserver-multinode-054207" [70bc63a6-e544-401c-90ae-7473ce8343da] Running
	I1212 22:25:32.481918   96656 system_pods.go:89] "kube-controller-manager-multinode-054207" [9040c58b-7f77-4355-880f-991c010720f7] Running
	I1212 22:25:32.481922   96656 system_pods.go:89] "kube-proxy-rnx8m" [e8875d71-d50e-44f1-92c1-db1858b4b3bb] Running
	I1212 22:25:32.481925   96656 system_pods.go:89] "kube-scheduler-multinode-054207" [79f6cbd9-988a-4dc2-a910-15abd7598b9c] Running
	I1212 22:25:32.481930   96656 system_pods.go:89] "storage-provisioner" [40d577b4-8d36-4f55-946d-92755b1d6998] Running
	I1212 22:25:32.481937   96656 system_pods.go:126] duration metric: took 203.18298ms to wait for k8s-apps to be running ...
	I1212 22:25:32.481947   96656 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:25:32.481993   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:25:32.495527   96656 system_svc.go:56] duration metric: took 13.571813ms WaitForService to wait for kubelet.
	I1212 22:25:32.495553   96656 kubeadm.go:581] duration metric: took 9.561591247s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:25:32.495578   96656 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:25:32.675103   96656 request.go:629] Waited for 179.391242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I1212 22:25:32.675175   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:25:32.675186   96656 round_trippers.go:469] Request Headers:
	I1212 22:25:32.675200   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:25:32.675213   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:25:32.677860   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:25:32.677893   96656 round_trippers.go:577] Response Headers:
	I1212 22:25:32.677904   96656 round_trippers.go:580]     Audit-Id: 824dc540-e0fd-4932-8648-ed730d433605
	I1212 22:25:32.677911   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:25:32.677918   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:25:32.677926   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:25:32.677934   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:25:32.677941   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:25:32 GMT
	I1212 22:25:32.678267   96656 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1212 22:25:32.678747   96656 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:25:32.678775   96656 node_conditions.go:123] node cpu capacity is 2
	I1212 22:25:32.678792   96656 node_conditions.go:105] duration metric: took 183.204971ms to run NodePressure ...
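The NodePressure verification above lists the nodes and reads each node's capacity (ephemeral storage, CPU) plus its pressure conditions. A minimal sketch of the same read, again assuming a clientset built as in the first example:

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity prints the same fields the log reports: ephemeral-storage
    // and cpu capacity, plus any pressure conditions that are currently True.
    func printNodeCapacity(ctx context.Context, client *kubernetes.Clientset) error {
    	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    		for _, cond := range n.Status.Conditions {
    			switch cond.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if cond.Status == corev1.ConditionTrue {
    					fmt.Printf("  pressure condition %s is True\n", cond.Type)
    				}
    			}
    		}
    	}
    	return nil
    }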
	I1212 22:25:32.678810   96656 start.go:228] waiting for startup goroutines ...
	I1212 22:25:32.678833   96656 start.go:233] waiting for cluster config update ...
	I1212 22:25:32.678911   96656 start.go:242] writing updated cluster config ...
	I1212 22:25:32.681479   96656 out.go:177] 
	I1212 22:25:32.683228   96656 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:25:32.683338   96656 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:25:32.687171   96656 out.go:177] * Starting worker node multinode-054207-m02 in cluster multinode-054207
	I1212 22:25:32.688693   96656 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:25:32.688723   96656 cache.go:56] Caching tarball of preloaded images
	I1212 22:25:32.688815   96656 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:25:32.688827   96656 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:25:32.688900   96656 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:25:32.689064   96656 start.go:365] acquiring machines lock for multinode-054207-m02: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:25:32.689108   96656 start.go:369] acquired machines lock for "multinode-054207-m02" in 22.761µs
	I1212 22:25:32.689125   96656 start.go:93] Provisioning new machine with config: &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:25:32.689209   96656 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1212 22:25:32.690974   96656 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 22:25:32.691100   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:25:32.691132   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:25:32.705486   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I1212 22:25:32.705909   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:25:32.706325   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:25:32.706348   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:25:32.706651   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:25:32.706842   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:25:32.707003   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:32.707148   96656 start.go:159] libmachine.API.Create for "multinode-054207" (driver="kvm2")
	I1212 22:25:32.707169   96656 client.go:168] LocalClient.Create starting
	I1212 22:25:32.707194   96656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 22:25:32.707226   96656 main.go:141] libmachine: Decoding PEM data...
	I1212 22:25:32.707256   96656 main.go:141] libmachine: Parsing certificate...
	I1212 22:25:32.707308   96656 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 22:25:32.707326   96656 main.go:141] libmachine: Decoding PEM data...
	I1212 22:25:32.707337   96656 main.go:141] libmachine: Parsing certificate...
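The "Reading certificate data ... Decoding PEM data... Parsing certificate..." steps correspond to loading the minikube CA and client certificates from disk before provisioning the new machine. A small standard-library sketch of that load; the path in the usage comment is taken from the log, and this is the general pattern rather than libmachine's own code.

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // loadCert reads a PEM file (e.g. .minikube/certs/ca.pem) and parses the
    // first certificate block in it.
    func loadCert(path string) (*x509.Certificate, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil || block.Type != "CERTIFICATE" {
    		return nil, fmt.Errorf("%s: no CERTIFICATE block found", path)
    	}
    	return x509.ParseCertificate(block.Bytes)
    }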
	I1212 22:25:32.707359   96656 main.go:141] libmachine: Running pre-create checks...
	I1212 22:25:32.707368   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .PreCreateCheck
	I1212 22:25:32.707528   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetConfigRaw
	I1212 22:25:32.707898   96656 main.go:141] libmachine: Creating machine...
	I1212 22:25:32.707912   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .Create
	I1212 22:25:32.708031   96656 main.go:141] libmachine: (multinode-054207-m02) Creating KVM machine...
	I1212 22:25:32.709295   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found existing default KVM network
	I1212 22:25:32.709495   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found existing private KVM network mk-multinode-054207
	I1212 22:25:32.709629   96656 main.go:141] libmachine: (multinode-054207-m02) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02 ...
	I1212 22:25:32.709655   96656 main.go:141] libmachine: (multinode-054207-m02) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:25:32.709736   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:32.709615   97017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:25:32.709827   96656 main.go:141] libmachine: (multinode-054207-m02) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 22:25:32.928340   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:32.928190   97017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa...
	I1212 22:25:33.062919   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:33.062778   97017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/multinode-054207-m02.rawdisk...
	I1212 22:25:33.062950   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Writing magic tar header
	I1212 22:25:33.062973   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Writing SSH key tar header
	I1212 22:25:33.062986   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:33.062889   97017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02 ...
	I1212 22:25:33.063002   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02
	I1212 22:25:33.063048   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02 (perms=drwx------)
	I1212 22:25:33.063071   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 22:25:33.063080   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 22:25:33.063095   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:25:33.063104   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 22:25:33.063114   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 22:25:33.063133   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 22:25:33.063146   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 22:25:33.063157   96656 main.go:141] libmachine: (multinode-054207-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 22:25:33.063166   96656 main.go:141] libmachine: (multinode-054207-m02) Creating domain...
	I1212 22:25:33.063176   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 22:25:33.063186   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home/jenkins
	I1212 22:25:33.063195   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Checking permissions on dir: /home
	I1212 22:25:33.063206   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Skipping /home - not owner
	I1212 22:25:33.064250   96656 main.go:141] libmachine: (multinode-054207-m02) define libvirt domain using xml: 
	I1212 22:25:33.064272   96656 main.go:141] libmachine: (multinode-054207-m02) <domain type='kvm'>
	I1212 22:25:33.064284   96656 main.go:141] libmachine: (multinode-054207-m02)   <name>multinode-054207-m02</name>
	I1212 22:25:33.064290   96656 main.go:141] libmachine: (multinode-054207-m02)   <memory unit='MiB'>2200</memory>
	I1212 22:25:33.064296   96656 main.go:141] libmachine: (multinode-054207-m02)   <vcpu>2</vcpu>
	I1212 22:25:33.064301   96656 main.go:141] libmachine: (multinode-054207-m02)   <features>
	I1212 22:25:33.064307   96656 main.go:141] libmachine: (multinode-054207-m02)     <acpi/>
	I1212 22:25:33.064312   96656 main.go:141] libmachine: (multinode-054207-m02)     <apic/>
	I1212 22:25:33.064322   96656 main.go:141] libmachine: (multinode-054207-m02)     <pae/>
	I1212 22:25:33.064331   96656 main.go:141] libmachine: (multinode-054207-m02)     
	I1212 22:25:33.064338   96656 main.go:141] libmachine: (multinode-054207-m02)   </features>
	I1212 22:25:33.064348   96656 main.go:141] libmachine: (multinode-054207-m02)   <cpu mode='host-passthrough'>
	I1212 22:25:33.064354   96656 main.go:141] libmachine: (multinode-054207-m02)   
	I1212 22:25:33.064359   96656 main.go:141] libmachine: (multinode-054207-m02)   </cpu>
	I1212 22:25:33.064365   96656 main.go:141] libmachine: (multinode-054207-m02)   <os>
	I1212 22:25:33.064372   96656 main.go:141] libmachine: (multinode-054207-m02)     <type>hvm</type>
	I1212 22:25:33.064406   96656 main.go:141] libmachine: (multinode-054207-m02)     <boot dev='cdrom'/>
	I1212 22:25:33.064430   96656 main.go:141] libmachine: (multinode-054207-m02)     <boot dev='hd'/>
	I1212 22:25:33.064439   96656 main.go:141] libmachine: (multinode-054207-m02)     <bootmenu enable='no'/>
	I1212 22:25:33.064447   96656 main.go:141] libmachine: (multinode-054207-m02)   </os>
	I1212 22:25:33.064454   96656 main.go:141] libmachine: (multinode-054207-m02)   <devices>
	I1212 22:25:33.064460   96656 main.go:141] libmachine: (multinode-054207-m02)     <disk type='file' device='cdrom'>
	I1212 22:25:33.064473   96656 main.go:141] libmachine: (multinode-054207-m02)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/boot2docker.iso'/>
	I1212 22:25:33.064482   96656 main.go:141] libmachine: (multinode-054207-m02)       <target dev='hdc' bus='scsi'/>
	I1212 22:25:33.064488   96656 main.go:141] libmachine: (multinode-054207-m02)       <readonly/>
	I1212 22:25:33.064496   96656 main.go:141] libmachine: (multinode-054207-m02)     </disk>
	I1212 22:25:33.064504   96656 main.go:141] libmachine: (multinode-054207-m02)     <disk type='file' device='disk'>
	I1212 22:25:33.064528   96656 main.go:141] libmachine: (multinode-054207-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 22:25:33.064551   96656 main.go:141] libmachine: (multinode-054207-m02)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/multinode-054207-m02.rawdisk'/>
	I1212 22:25:33.064562   96656 main.go:141] libmachine: (multinode-054207-m02)       <target dev='hda' bus='virtio'/>
	I1212 22:25:33.064568   96656 main.go:141] libmachine: (multinode-054207-m02)     </disk>
	I1212 22:25:33.064577   96656 main.go:141] libmachine: (multinode-054207-m02)     <interface type='network'>
	I1212 22:25:33.064584   96656 main.go:141] libmachine: (multinode-054207-m02)       <source network='mk-multinode-054207'/>
	I1212 22:25:33.064595   96656 main.go:141] libmachine: (multinode-054207-m02)       <model type='virtio'/>
	I1212 22:25:33.064604   96656 main.go:141] libmachine: (multinode-054207-m02)     </interface>
	I1212 22:25:33.064617   96656 main.go:141] libmachine: (multinode-054207-m02)     <interface type='network'>
	I1212 22:25:33.064624   96656 main.go:141] libmachine: (multinode-054207-m02)       <source network='default'/>
	I1212 22:25:33.064632   96656 main.go:141] libmachine: (multinode-054207-m02)       <model type='virtio'/>
	I1212 22:25:33.064639   96656 main.go:141] libmachine: (multinode-054207-m02)     </interface>
	I1212 22:25:33.064647   96656 main.go:141] libmachine: (multinode-054207-m02)     <serial type='pty'>
	I1212 22:25:33.064653   96656 main.go:141] libmachine: (multinode-054207-m02)       <target port='0'/>
	I1212 22:25:33.064664   96656 main.go:141] libmachine: (multinode-054207-m02)     </serial>
	I1212 22:25:33.064671   96656 main.go:141] libmachine: (multinode-054207-m02)     <console type='pty'>
	I1212 22:25:33.064680   96656 main.go:141] libmachine: (multinode-054207-m02)       <target type='serial' port='0'/>
	I1212 22:25:33.064686   96656 main.go:141] libmachine: (multinode-054207-m02)     </console>
	I1212 22:25:33.064697   96656 main.go:141] libmachine: (multinode-054207-m02)     <rng model='virtio'>
	I1212 22:25:33.064708   96656 main.go:141] libmachine: (multinode-054207-m02)       <backend model='random'>/dev/random</backend>
	I1212 22:25:33.064716   96656 main.go:141] libmachine: (multinode-054207-m02)     </rng>
	I1212 22:25:33.064722   96656 main.go:141] libmachine: (multinode-054207-m02)     
	I1212 22:25:33.064730   96656 main.go:141] libmachine: (multinode-054207-m02)     
	I1212 22:25:33.064743   96656 main.go:141] libmachine: (multinode-054207-m02)   </devices>
	I1212 22:25:33.064756   96656 main.go:141] libmachine: (multinode-054207-m02) </domain>
	I1212 22:25:33.064788   96656 main.go:141] libmachine: (multinode-054207-m02) 
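For readability, the libvirt domain definition interleaved with the log prefixes above assembles to the following XML (content unchanged from the log, only the log prefixes and blank placeholder lines stripped):

    <domain type='kvm'>
      <name>multinode-054207-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/multinode-054207-m02.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-multinode-054207'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>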
	I1212 22:25:33.071819   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:5d:45:64 in network default
	I1212 22:25:33.072346   96656 main.go:141] libmachine: (multinode-054207-m02) Ensuring networks are active...
	I1212 22:25:33.072380   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:33.072999   96656 main.go:141] libmachine: (multinode-054207-m02) Ensuring network default is active
	I1212 22:25:33.073260   96656 main.go:141] libmachine: (multinode-054207-m02) Ensuring network mk-multinode-054207 is active
	I1212 22:25:33.073566   96656 main.go:141] libmachine: (multinode-054207-m02) Getting domain xml...
	I1212 22:25:33.074214   96656 main.go:141] libmachine: (multinode-054207-m02) Creating domain...
	I1212 22:25:34.320472   96656 main.go:141] libmachine: (multinode-054207-m02) Waiting to get IP...
	I1212 22:25:34.321145   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:34.321460   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:34.321479   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:34.321439   97017 retry.go:31] will retry after 240.177809ms: waiting for machine to come up
	I1212 22:25:34.562825   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:34.563280   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:34.563317   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:34.563212   97017 retry.go:31] will retry after 317.942519ms: waiting for machine to come up
	I1212 22:25:34.882833   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:34.883283   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:34.883308   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:34.883200   97017 retry.go:31] will retry after 376.286256ms: waiting for machine to come up
	I1212 22:25:35.260741   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:35.261228   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:35.261263   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:35.261193   97017 retry.go:31] will retry after 492.834981ms: waiting for machine to come up
	I1212 22:25:35.756043   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:35.756476   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:35.756502   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:35.756418   97017 retry.go:31] will retry after 466.659867ms: waiting for machine to come up
	I1212 22:25:36.225242   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:36.225720   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:36.225748   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:36.225666   97017 retry.go:31] will retry after 891.192529ms: waiting for machine to come up
	I1212 22:25:37.119150   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:37.119628   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:37.119653   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:37.119568   97017 retry.go:31] will retry after 945.89353ms: waiting for machine to come up
	I1212 22:25:38.066546   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:38.066934   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:38.066967   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:38.066896   97017 retry.go:31] will retry after 1.394994991s: waiting for machine to come up
	I1212 22:25:39.463138   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:39.463615   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:39.463646   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:39.463563   97017 retry.go:31] will retry after 1.76789983s: waiting for machine to come up
	I1212 22:25:41.232639   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:41.233001   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:41.233024   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:41.232968   97017 retry.go:31] will retry after 2.195589779s: waiting for machine to come up
	I1212 22:25:43.431198   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:43.431642   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:43.431676   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:43.431586   97017 retry.go:31] will retry after 2.713460164s: waiting for machine to come up
	I1212 22:25:46.148614   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:46.149118   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:46.149147   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:46.149083   97017 retry.go:31] will retry after 2.606617631s: waiting for machine to come up
	I1212 22:25:48.757424   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:48.757837   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:48.757862   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:48.757788   97017 retry.go:31] will retry after 3.447463379s: waiting for machine to come up
	I1212 22:25:52.209376   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:52.209774   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find current IP address of domain multinode-054207-m02 in network mk-multinode-054207
	I1212 22:25:52.209801   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | I1212 22:25:52.209713   97017 retry.go:31] will retry after 3.528724037s: waiting for machine to come up
	I1212 22:25:55.742418   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.742785   96656 main.go:141] libmachine: (multinode-054207-m02) Found IP for machine: 192.168.39.15
	I1212 22:25:55.742800   96656 main.go:141] libmachine: (multinode-054207-m02) Reserving static IP address...
	I1212 22:25:55.742810   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has current primary IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.743221   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | unable to find host DHCP lease matching {name: "multinode-054207-m02", mac: "52:54:00:db:c3:3d", ip: "192.168.39.15"} in network mk-multinode-054207
	I1212 22:25:55.814751   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Getting to WaitForSSH function...
	I1212 22:25:55.814793   96656 main.go:141] libmachine: (multinode-054207-m02) Reserved static IP address: 192.168.39.15
	I1212 22:25:55.814848   96656 main.go:141] libmachine: (multinode-054207-m02) Waiting for SSH to be available...
	I1212 22:25:55.817876   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.818286   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:55.818300   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.818571   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Using SSH client type: external
	I1212 22:25:55.818602   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa (-rw-------)
	I1212 22:25:55.818638   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:25:55.818654   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | About to run SSH command:
	I1212 22:25:55.818670   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | exit 0
	I1212 22:25:55.915046   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 22:25:55.915341   96656 main.go:141] libmachine: (multinode-054207-m02) KVM machine creation complete!
	I1212 22:25:55.915676   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetConfigRaw
	I1212 22:25:55.916279   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:55.916471   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:55.916616   96656 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 22:25:55.916634   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetState
	I1212 22:25:55.917893   96656 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 22:25:55.917906   96656 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 22:25:55.917918   96656 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 22:25:55.917929   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:55.920773   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.921155   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:55.921185   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:55.921397   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:55.921611   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:55.921802   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:55.921956   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:55.922149   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:55.922661   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:55.922687   96656 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 22:25:56.054349   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:25:56.054383   96656 main.go:141] libmachine: Detecting the provisioner...
	I1212 22:25:56.054392   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:56.057290   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.057660   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.057694   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.057836   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:56.058069   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.058253   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.058409   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:56.058603   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:56.058913   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:56.058925   96656 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 22:25:56.188066   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 22:25:56.188146   96656 main.go:141] libmachine: found compatible host: buildroot
	I1212 22:25:56.188161   96656 main.go:141] libmachine: Provisioning with buildroot...
	I1212 22:25:56.188185   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:25:56.188526   96656 buildroot.go:166] provisioning hostname "multinode-054207-m02"
	I1212 22:25:56.188562   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:25:56.188767   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:56.191378   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.191708   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.191742   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.191916   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:56.192109   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.192294   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.192434   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:56.192585   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:56.192939   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:56.192954   96656 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207-m02 && echo "multinode-054207-m02" | sudo tee /etc/hostname
	I1212 22:25:56.340022   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-054207-m02
	
	I1212 22:25:56.340051   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:56.342620   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.343014   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.343044   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.343230   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:56.343439   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.343619   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.343764   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:56.343937   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:56.344250   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:56.344267   96656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-054207-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-054207-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-054207-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:25:56.479213   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:25:56.479263   96656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:25:56.479284   96656 buildroot.go:174] setting up certificates
	I1212 22:25:56.479296   96656 provision.go:83] configureAuth start
	I1212 22:25:56.479305   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:25:56.479633   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:25:56.482610   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.482974   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.483006   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.483150   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:56.485452   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.485852   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.485894   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.486056   96656 provision.go:138] copyHostCerts
	I1212 22:25:56.486083   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:25:56.486114   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:25:56.486123   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:25:56.486193   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:25:56.486296   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:25:56.486325   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:25:56.486331   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:25:56.486368   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:25:56.486425   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:25:56.486443   96656 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:25:56.486447   96656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:25:56.486482   96656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:25:56.486545   96656 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.multinode-054207-m02 san=[192.168.39.15 192.168.39.15 localhost 127.0.0.1 minikube multinode-054207-m02]
	I1212 22:25:56.934173   96656 provision.go:172] copyRemoteCerts
	I1212 22:25:56.934237   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:25:56.934268   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:56.937017   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.937359   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:56.937396   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:56.937598   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:56.937798   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:56.937969   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:56.938090   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:25:57.032502   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:25:57.032580   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:25:57.059076   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:25:57.059150   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 22:25:57.084653   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:25:57.084727   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:25:57.110269   96656 provision.go:86] duration metric: configureAuth took 630.959999ms
	I1212 22:25:57.110295   96656 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:25:57.110468   96656 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:25:57.110547   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:57.113410   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.113778   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.113810   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.113982   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:57.114176   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.114369   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.114516   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:57.114657   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:57.114970   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:57.114985   96656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:25:57.448672   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:25:57.448706   96656 main.go:141] libmachine: Checking connection to Docker...
	I1212 22:25:57.448743   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetURL
	I1212 22:25:57.450091   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | Using libvirt version 6000000
	I1212 22:25:57.452223   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.452561   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.452591   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.452736   96656 main.go:141] libmachine: Docker is up and running!
	I1212 22:25:57.452754   96656 main.go:141] libmachine: Reticulating splines...
	I1212 22:25:57.452763   96656 client.go:171] LocalClient.Create took 24.745585947s
	I1212 22:25:57.452793   96656 start.go:167] duration metric: libmachine.API.Create for "multinode-054207" took 24.745644871s
	I1212 22:25:57.452807   96656 start.go:300] post-start starting for "multinode-054207-m02" (driver="kvm2")
	I1212 22:25:57.452826   96656 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:25:57.452850   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:57.453066   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:25:57.453088   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:57.455297   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.455736   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.455772   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.455933   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:57.456106   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.456238   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:57.456416   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:25:57.551234   96656 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:25:57.555843   96656 command_runner.go:130] > NAME=Buildroot
	I1212 22:25:57.555884   96656 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 22:25:57.555888   96656 command_runner.go:130] > ID=buildroot
	I1212 22:25:57.555894   96656 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 22:25:57.555899   96656 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 22:25:57.556116   96656 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:25:57.556158   96656 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:25:57.556230   96656 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:25:57.556308   96656 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:25:57.556319   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:25:57.556411   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:25:57.566920   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:25:57.591609   96656 start.go:303] post-start completed in 138.786194ms
	I1212 22:25:57.591666   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetConfigRaw
	I1212 22:25:57.592261   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:25:57.594859   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.595295   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.595327   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.595542   96656 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:25:57.595721   96656 start.go:128] duration metric: createHost completed in 24.906500984s
	I1212 22:25:57.595745   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:57.598121   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.598461   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.598495   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.598605   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:57.598771   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.598938   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.599047   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:57.599205   96656 main.go:141] libmachine: Using SSH client type: native
	I1212 22:25:57.599549   96656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:25:57.599563   96656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:25:57.728716   96656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702419957.711042513
	
	I1212 22:25:57.728749   96656 fix.go:206] guest clock: 1702419957.711042513
	I1212 22:25:57.728757   96656 fix.go:219] Guest: 2023-12-12 22:25:57.711042513 +0000 UTC Remote: 2023-12-12 22:25:57.595733642 +0000 UTC m=+92.465460434 (delta=115.308871ms)
	I1212 22:25:57.728771   96656 fix.go:190] guest clock delta is within tolerance: 115.308871ms
	I1212 22:25:57.728775   96656 start.go:83] releasing machines lock for "multinode-054207-m02", held for 25.0396595s
	I1212 22:25:57.728795   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:57.729041   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:25:57.731553   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.731908   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.731961   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.734379   96656 out.go:177] * Found network options:
	I1212 22:25:57.735824   96656 out.go:177]   - NO_PROXY=192.168.39.172
	W1212 22:25:57.737308   96656 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:25:57.737362   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:57.737894   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:57.738094   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:25:57.738184   96656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:25:57.738226   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	W1212 22:25:57.738256   96656 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:25:57.738352   96656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:25:57.738380   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:25:57.740808   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.740977   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.741166   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.741196   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.741347   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:57.741437   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:57.741478   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:57.741540   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.741634   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:25:57.741703   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:57.741793   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:25:57.741849   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:25:57.741931   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:25:57.742049   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:25:57.992451   96656 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:25:57.992572   96656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:25:57.998624   96656 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 22:25:57.998694   96656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:25:57.998756   96656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:25:58.013220   96656 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 22:25:58.013476   96656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:25:58.013499   96656 start.go:475] detecting cgroup driver to use...
	I1212 22:25:58.013582   96656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:25:58.031371   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:25:58.043397   96656 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:25:58.043458   96656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:25:58.055469   96656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:25:58.067483   96656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:25:58.172734   96656 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 22:25:58.172819   96656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:25:58.298967   96656 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 22:25:58.299010   96656 docker.go:219] disabling docker service ...
	I1212 22:25:58.299062   96656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:25:58.312995   96656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:25:58.324694   96656 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 22:25:58.325186   96656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:25:58.434007   96656 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 22:25:58.434087   96656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:25:58.447043   96656 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 22:25:58.447456   96656 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 22:25:58.536701   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:25:58.550364   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:25:58.569240   96656 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:25:58.569654   96656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:25:58.569727   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:25:58.581259   96656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:25:58.581352   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:25:58.593188   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:25:58.605005   96656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:25:58.615689   96656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:25:58.626300   96656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:25:58.635391   96656 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:25:58.635519   96656 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:25:58.635599   96656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:25:58.651468   96656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:25:58.661607   96656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:25:58.773099   96656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:25:58.938628   96656 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:25:58.938713   96656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:25:58.943487   96656 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:25:58.943513   96656 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:25:58.943523   96656 command_runner.go:130] > Device: 16h/22d	Inode: 725         Links: 1
	I1212 22:25:58.943534   96656 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:25:58.943541   96656 command_runner.go:130] > Access: 2023-12-12 22:25:58.906697094 +0000
	I1212 22:25:58.943549   96656 command_runner.go:130] > Modify: 2023-12-12 22:25:58.906697094 +0000
	I1212 22:25:58.943556   96656 command_runner.go:130] > Change: 2023-12-12 22:25:58.906697094 +0000
	I1212 22:25:58.943562   96656 command_runner.go:130] >  Birth: -
	I1212 22:25:58.943594   96656 start.go:543] Will wait 60s for crictl version
	I1212 22:25:58.943663   96656 ssh_runner.go:195] Run: which crictl
	I1212 22:25:58.947047   96656 command_runner.go:130] > /usr/bin/crictl
	I1212 22:25:58.947178   96656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:25:58.984215   96656 command_runner.go:130] > Version:  0.1.0
	I1212 22:25:58.984242   96656 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:25:58.984249   96656 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 22:25:58.984256   96656 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:25:58.985760   96656 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:25:58.985859   96656 ssh_runner.go:195] Run: crio --version
	I1212 22:25:59.033666   96656 command_runner.go:130] > crio version 1.24.1
	I1212 22:25:59.033693   96656 command_runner.go:130] > Version:          1.24.1
	I1212 22:25:59.033700   96656 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:25:59.033704   96656 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:25:59.033714   96656 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:25:59.033719   96656 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:25:59.033726   96656 command_runner.go:130] > Compiler:         gc
	I1212 22:25:59.033734   96656 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:25:59.033743   96656 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:25:59.033755   96656 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:25:59.033763   96656 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:25:59.033769   96656 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:25:59.034977   96656 ssh_runner.go:195] Run: crio --version
	I1212 22:25:59.082250   96656 command_runner.go:130] > crio version 1.24.1
	I1212 22:25:59.082275   96656 command_runner.go:130] > Version:          1.24.1
	I1212 22:25:59.082282   96656 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:25:59.082287   96656 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:25:59.082293   96656 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:25:59.082299   96656 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:25:59.082303   96656 command_runner.go:130] > Compiler:         gc
	I1212 22:25:59.082307   96656 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:25:59.082312   96656 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:25:59.082319   96656 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:25:59.082328   96656 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:25:59.082342   96656 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:25:59.087054   96656 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:25:59.088789   96656 out.go:177]   - env NO_PROXY=192.168.39.172
	I1212 22:25:59.090427   96656 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:25:59.093328   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:59.093658   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:25:59.093682   96656 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:25:59.093893   96656 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:25:59.098233   96656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:25:59.111162   96656 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207 for IP: 192.168.39.15
	I1212 22:25:59.111214   96656 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:25:59.111414   96656 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:25:59.111460   96656 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:25:59.111475   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:25:59.111489   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:25:59.111501   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:25:59.111517   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:25:59.111574   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:25:59.111602   96656 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:25:59.111617   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:25:59.111645   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:25:59.111672   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:25:59.111716   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:25:59.111759   96656 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:25:59.111788   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:25:59.111804   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:25:59.111816   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:25:59.112249   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:25:59.136532   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:25:59.160457   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:25:59.184149   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:25:59.211298   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:25:59.238227   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:25:59.264489   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:25:59.291088   96656 ssh_runner.go:195] Run: openssl version
	I1212 22:25:59.296929   96656 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 22:25:59.297013   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:25:59.307269   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:25:59.311684   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:25:59.311930   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:25:59.311998   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:25:59.317972   96656 command_runner.go:130] > 51391683
	I1212 22:25:59.318038   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 22:25:59.328256   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:25:59.338525   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:25:59.343035   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:25:59.343104   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:25:59.343175   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:25:59.348549   96656 command_runner.go:130] > 3ec20f2e
	I1212 22:25:59.348848   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 22:25:59.358631   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:25:59.368411   96656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:25:59.372714   96656 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:25:59.372920   96656 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:25:59.372967   96656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:25:59.378134   96656 command_runner.go:130] > b5213941
	I1212 22:25:59.378525   96656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
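The three commands above are OpenSSL's hash-based trust-store layout: the CA file is linked into /etc/ssl/certs, its subject hash is computed, and a second symlink named "<hash>.0" is created so that lookup-by-hash finds the certificate. A minimal shell sketch of the same steps, reusing the minikubeCA paths from the log (the hash value will differ per certificate):

  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"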
	I1212 22:25:59.388460   96656 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:25:59.392717   96656 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:25:59.392768   96656 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:25:59.392868   96656 ssh_runner.go:195] Run: crio config
	I1212 22:25:59.449893   96656 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:25:59.449924   96656 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:25:59.449934   96656 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:25:59.449939   96656 command_runner.go:130] > #
	I1212 22:25:59.449953   96656 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:25:59.449965   96656 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:25:59.449974   96656 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:25:59.449996   96656 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:25:59.450007   96656 command_runner.go:130] > # reload'.
	I1212 22:25:59.450018   96656 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:25:59.450032   96656 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:25:59.450047   96656 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:25:59.450064   96656 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:25:59.450073   96656 command_runner.go:130] > [crio]
	I1212 22:25:59.450084   96656 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:25:59.450096   96656 command_runner.go:130] > # containers images, in this directory.
	I1212 22:25:59.450130   96656 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 22:25:59.450146   96656 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:25:59.450415   96656 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 22:25:59.450434   96656 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:25:59.450440   96656 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:25:59.450685   96656 command_runner.go:130] > storage_driver = "overlay"
	I1212 22:25:59.450696   96656 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:25:59.450702   96656 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:25:59.450706   96656 command_runner.go:130] > storage_option = [
	I1212 22:25:59.450967   96656 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 22:25:59.451019   96656 command_runner.go:130] > ]
	I1212 22:25:59.451038   96656 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:25:59.451049   96656 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:25:59.451585   96656 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:25:59.451595   96656 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:25:59.451601   96656 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:25:59.451606   96656 command_runner.go:130] > # always happen on a node reboot
	I1212 22:25:59.452085   96656 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:25:59.452095   96656 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:25:59.452108   96656 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:25:59.452129   96656 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:25:59.452634   96656 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:25:59.452645   96656 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:25:59.452653   96656 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:25:59.453084   96656 command_runner.go:130] > # internal_wipe = true
	I1212 22:25:59.453093   96656 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:25:59.453102   96656 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:25:59.453108   96656 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:25:59.453707   96656 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:25:59.453718   96656 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:25:59.453722   96656 command_runner.go:130] > [crio.api]
	I1212 22:25:59.453727   96656 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:25:59.454113   96656 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:25:59.454128   96656 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:25:59.454537   96656 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:25:59.454556   96656 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:25:59.454565   96656 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:25:59.454581   96656 command_runner.go:130] > # stream_port = "0"
	I1212 22:25:59.454591   96656 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:25:59.454718   96656 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:25:59.454735   96656 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:25:59.454743   96656 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:25:59.454752   96656 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:25:59.454774   96656 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:25:59.454783   96656 command_runner.go:130] > # minutes.
	I1212 22:25:59.454792   96656 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:25:59.454805   96656 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:25:59.454818   96656 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:25:59.454844   96656 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:25:59.454858   96656 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:25:59.454867   96656 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:25:59.454879   96656 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:25:59.454889   96656 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:25:59.454902   96656 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:25:59.454910   96656 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 22:25:59.454924   96656 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:25:59.454935   96656 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 22:25:59.454962   96656 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:25:59.454975   96656 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:25:59.454982   96656 command_runner.go:130] > [crio.runtime]
	I1212 22:25:59.454995   96656 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:25:59.455006   96656 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:25:59.455014   96656 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:25:59.455027   96656 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:25:59.455036   96656 command_runner.go:130] > # default_ulimits = [
	I1212 22:25:59.455069   96656 command_runner.go:130] > # ]
	I1212 22:25:59.455083   96656 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:25:59.455091   96656 command_runner.go:130] > # no_pivot = false
	I1212 22:25:59.455104   96656 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:25:59.455117   96656 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:25:59.455128   96656 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:25:59.455141   96656 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:25:59.455153   96656 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:25:59.455172   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:25:59.455183   96656 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 22:25:59.455194   96656 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:25:59.455208   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:25:59.455217   96656 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:25:59.455228   96656 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:25:59.455253   96656 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:25:59.455273   96656 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:25:59.455283   96656 command_runner.go:130] > conmon_env = [
	I1212 22:25:59.455293   96656 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 22:25:59.455301   96656 command_runner.go:130] > ]
	I1212 22:25:59.455309   96656 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:25:59.455317   96656 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:25:59.455323   96656 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:25:59.455327   96656 command_runner.go:130] > # default_env = [
	I1212 22:25:59.455331   96656 command_runner.go:130] > # ]
	I1212 22:25:59.455337   96656 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:25:59.455343   96656 command_runner.go:130] > # selinux = false
	I1212 22:25:59.455352   96656 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:25:59.455362   96656 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:25:59.455378   96656 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:25:59.455388   96656 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:25:59.455401   96656 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:25:59.455413   96656 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:25:59.455426   96656 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:25:59.455435   96656 command_runner.go:130] > # which might increase security.
	I1212 22:25:59.455445   96656 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 22:25:59.455456   96656 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:25:59.455462   96656 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:25:59.455470   96656 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:25:59.455476   96656 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:25:59.455484   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:25:59.455488   96656 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:25:59.455494   96656 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:25:59.455498   96656 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:25:59.455522   96656 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:25:59.455542   96656 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:25:59.455553   96656 command_runner.go:130] > # irqbalance daemon.
	I1212 22:25:59.455564   96656 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:25:59.455577   96656 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:25:59.455590   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:25:59.455596   96656 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:25:59.455604   96656 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:25:59.455610   96656 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:25:59.455619   96656 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:25:59.455628   96656 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:25:59.455642   96656 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:25:59.455656   96656 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:25:59.455666   96656 command_runner.go:130] > # will be added.
	I1212 22:25:59.455674   96656 command_runner.go:130] > # default_capabilities = [
	I1212 22:25:59.455681   96656 command_runner.go:130] > # 	"CHOWN",
	I1212 22:25:59.455691   96656 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:25:59.455698   96656 command_runner.go:130] > # 	"FSETID",
	I1212 22:25:59.455704   96656 command_runner.go:130] > # 	"FOWNER",
	I1212 22:25:59.455737   96656 command_runner.go:130] > # 	"SETGID",
	I1212 22:25:59.455747   96656 command_runner.go:130] > # 	"SETUID",
	I1212 22:25:59.455754   96656 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:25:59.455761   96656 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:25:59.455771   96656 command_runner.go:130] > # 	"KILL",
	I1212 22:25:59.455777   96656 command_runner.go:130] > # ]
	I1212 22:25:59.455790   96656 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:25:59.455800   96656 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:25:59.455810   96656 command_runner.go:130] > # default_sysctls = [
	I1212 22:25:59.455816   96656 command_runner.go:130] > # ]
	I1212 22:25:59.455833   96656 command_runner.go:130] > # List of devices on the host that a
	I1212 22:25:59.455845   96656 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:25:59.455855   96656 command_runner.go:130] > # allowed_devices = [
	I1212 22:25:59.455879   96656 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:25:59.455888   96656 command_runner.go:130] > # ]
	I1212 22:25:59.455897   96656 command_runner.go:130] > # List of additional devices, specified as
	I1212 22:25:59.455912   96656 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:25:59.455923   96656 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:25:59.455975   96656 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:25:59.455987   96656 command_runner.go:130] > # additional_devices = [
	I1212 22:25:59.455993   96656 command_runner.go:130] > # ]
	I1212 22:25:59.456002   96656 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:25:59.456011   96656 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:25:59.456015   96656 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:25:59.456020   96656 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:25:59.456023   96656 command_runner.go:130] > # ]
	I1212 22:25:59.456031   96656 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:25:59.456038   96656 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:25:59.456042   96656 command_runner.go:130] > # Defaults to false.
	I1212 22:25:59.456065   96656 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:25:59.456075   96656 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:25:59.456090   96656 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:25:59.456100   96656 command_runner.go:130] > # hooks_dir = [
	I1212 22:25:59.456109   96656 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:25:59.456118   96656 command_runner.go:130] > # ]
	I1212 22:25:59.456128   96656 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:25:59.456144   96656 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:25:59.456154   96656 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:25:59.456163   96656 command_runner.go:130] > #
	I1212 22:25:59.456173   96656 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:25:59.456187   96656 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:25:59.456199   96656 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:25:59.456208   96656 command_runner.go:130] > #
	I1212 22:25:59.456219   96656 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:25:59.456233   96656 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:25:59.456244   96656 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:25:59.456255   96656 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:25:59.456270   96656 command_runner.go:130] > #
	I1212 22:25:59.456282   96656 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:25:59.456294   96656 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:25:59.456309   96656 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:25:59.456317   96656 command_runner.go:130] > pids_limit = 1024
	I1212 22:25:59.456331   96656 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 22:25:59.456345   96656 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:25:59.456365   96656 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:25:59.456383   96656 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:25:59.456414   96656 command_runner.go:130] > # log_size_max = -1
	I1212 22:25:59.456429   96656 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 22:25:59.456439   96656 command_runner.go:130] > # log_to_journald = false
	I1212 22:25:59.456451   96656 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:25:59.456465   96656 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:25:59.456474   96656 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:25:59.456483   96656 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:25:59.456492   96656 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:25:59.456499   96656 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:25:59.456512   96656 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:25:59.456521   96656 command_runner.go:130] > # read_only = false
	I1212 22:25:59.456531   96656 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:25:59.456545   96656 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:25:59.456555   96656 command_runner.go:130] > # live configuration reload.
	I1212 22:25:59.456566   96656 command_runner.go:130] > # log_level = "info"
	I1212 22:25:59.456576   96656 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:25:59.456592   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:25:59.456600   96656 command_runner.go:130] > # log_filter = ""
	I1212 22:25:59.456618   96656 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:25:59.456631   96656 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:25:59.456641   96656 command_runner.go:130] > # separated by comma.
	I1212 22:25:59.456648   96656 command_runner.go:130] > # uid_mappings = ""
	I1212 22:25:59.456660   96656 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:25:59.456673   96656 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:25:59.456683   96656 command_runner.go:130] > # separated by comma.
	I1212 22:25:59.456691   96656 command_runner.go:130] > # gid_mappings = ""
	I1212 22:25:59.456704   96656 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:25:59.456715   96656 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:25:59.456728   96656 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:25:59.456738   96656 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:25:59.456752   96656 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:25:59.456765   96656 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:25:59.456779   96656 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:25:59.456789   96656 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:25:59.456807   96656 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:25:59.456819   96656 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:25:59.456825   96656 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:25:59.456831   96656 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:25:59.456837   96656 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:25:59.456845   96656 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:25:59.456850   96656 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:25:59.456860   96656 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:25:59.456870   96656 command_runner.go:130] > drop_infra_ctr = false
	I1212 22:25:59.456883   96656 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:25:59.456895   96656 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:25:59.456910   96656 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:25:59.456939   96656 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:25:59.456953   96656 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:25:59.456965   96656 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:25:59.456976   96656 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:25:59.456988   96656 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:25:59.456998   96656 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 22:25:59.457015   96656 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:25:59.457028   96656 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:25:59.457037   96656 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:25:59.457042   96656 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:25:59.457049   96656 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:25:59.457057   96656 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 22:25:59.457068   96656 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 22:25:59.457075   96656 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:25:59.457083   96656 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:25:59.457088   96656 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:25:59.457096   96656 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:25:59.457099   96656 command_runner.go:130] > # ]
	I1212 22:25:59.457105   96656 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:25:59.457118   96656 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:25:59.457132   96656 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:25:59.457146   96656 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:25:59.457154   96656 command_runner.go:130] > #
	I1212 22:25:59.457165   96656 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:25:59.457180   96656 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:25:59.457190   96656 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:25:59.457199   96656 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:25:59.457210   96656 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:25:59.457219   96656 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:25:59.457225   96656 command_runner.go:130] > # Where:
	I1212 22:25:59.457238   96656 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:25:59.457252   96656 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:25:59.457270   96656 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:25:59.457282   96656 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:25:59.457296   96656 command_runner.go:130] > #   in $PATH.
	I1212 22:25:59.457309   96656 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:25:59.457320   96656 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:25:59.457333   96656 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:25:59.457342   96656 command_runner.go:130] > #   state.
	I1212 22:25:59.457349   96656 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:25:59.457361   96656 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 22:25:59.457375   96656 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:25:59.457390   96656 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:25:59.457404   96656 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:25:59.457418   96656 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:25:59.457428   96656 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:25:59.457443   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:25:59.457457   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:25:59.457471   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:25:59.457484   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:25:59.457496   96656 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:25:59.457510   96656 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:25:59.457523   96656 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:25:59.457532   96656 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:25:59.457537   96656 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:25:59.457542   96656 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:25:59.457547   96656 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 22:25:59.457556   96656 command_runner.go:130] > runtime_type = "oci"
	I1212 22:25:59.457563   96656 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:25:59.457570   96656 command_runner.go:130] > runtime_config_path = ""
	I1212 22:25:59.457585   96656 command_runner.go:130] > monitor_path = ""
	I1212 22:25:59.457596   96656 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:25:59.457607   96656 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:25:59.457620   96656 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:25:59.457630   96656 command_runner.go:130] > # running containers
	I1212 22:25:59.457640   96656 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:25:59.457651   96656 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:25:59.457716   96656 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:25:59.457731   96656 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:25:59.457739   96656 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:25:59.457747   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:25:59.457781   96656 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:25:59.457793   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:25:59.457802   96656 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:25:59.457812   96656 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:25:59.457829   96656 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:25:59.457841   96656 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:25:59.457852   96656 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:25:59.457870   96656 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 22:25:59.457885   96656 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:25:59.457897   96656 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:25:59.457915   96656 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:25:59.457932   96656 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:25:59.457945   96656 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:25:59.457959   96656 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:25:59.457969   96656 command_runner.go:130] > # Example:
	I1212 22:25:59.457977   96656 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:25:59.457988   96656 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:25:59.457996   96656 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:25:59.458004   96656 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:25:59.458013   96656 command_runner.go:130] > # cpuset = 0
	I1212 22:25:59.458022   96656 command_runner.go:130] > # cpushares = "0-1"
	I1212 22:25:59.458028   96656 command_runner.go:130] > # Where:
	I1212 22:25:59.458039   96656 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:25:59.458054   96656 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:25:59.458066   96656 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:25:59.458094   96656 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:25:59.458117   96656 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:25:59.458131   96656 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:25:59.458140   96656 command_runner.go:130] > # 
	I1212 22:25:59.458154   96656 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:25:59.458162   96656 command_runner.go:130] > #
	I1212 22:25:59.458176   96656 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:25:59.458189   96656 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:25:59.458199   96656 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:25:59.458211   96656 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:25:59.458224   96656 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:25:59.458234   96656 command_runner.go:130] > [crio.image]
	I1212 22:25:59.458247   96656 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:25:59.458257   96656 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:25:59.458275   96656 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:25:59.458287   96656 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:25:59.458294   96656 command_runner.go:130] > # global_auth_file = ""
	I1212 22:25:59.458307   96656 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:25:59.458323   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:25:59.458334   96656 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:25:59.458348   96656 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:25:59.458361   96656 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:25:59.458373   96656 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:25:59.458383   96656 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:25:59.458391   96656 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:25:59.458404   96656 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 22:25:59.458421   96656 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 22:25:59.458435   96656 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:25:59.458445   96656 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:25:59.458457   96656 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:25:59.458470   96656 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:25:59.458482   96656 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:25:59.458491   96656 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:25:59.458503   96656 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:25:59.458514   96656 command_runner.go:130] > # signature_policy = ""
	I1212 22:25:59.458524   96656 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:25:59.458543   96656 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:25:59.458552   96656 command_runner.go:130] > # changing them here.
	I1212 22:25:59.458563   96656 command_runner.go:130] > # insecure_registries = [
	I1212 22:25:59.458572   96656 command_runner.go:130] > # ]
	I1212 22:25:59.458583   96656 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:25:59.458591   96656 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:25:59.458602   96656 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:25:59.458613   96656 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:25:59.458622   96656 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:25:59.458660   96656 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:25:59.458670   96656 command_runner.go:130] > # CNI plugins.
	I1212 22:25:59.458679   96656 command_runner.go:130] > [crio.network]
	I1212 22:25:59.458687   96656 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:25:59.458699   96656 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 22:25:59.458710   96656 command_runner.go:130] > # cni_default_network = ""
	I1212 22:25:59.458720   96656 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:25:59.458731   96656 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:25:59.458744   96656 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:25:59.458757   96656 command_runner.go:130] > # plugin_dirs = [
	I1212 22:25:59.458766   96656 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:25:59.458775   96656 command_runner.go:130] > # ]
	I1212 22:25:59.458784   96656 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:25:59.458792   96656 command_runner.go:130] > [crio.metrics]
	I1212 22:25:59.458804   96656 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:25:59.458813   96656 command_runner.go:130] > enable_metrics = true
	I1212 22:25:59.458824   96656 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:25:59.458835   96656 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 22:25:59.458848   96656 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:25:59.458861   96656 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:25:59.458878   96656 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:25:59.458888   96656 command_runner.go:130] > # metrics_collectors = [
	I1212 22:25:59.458898   96656 command_runner.go:130] > # 	"operations",
	I1212 22:25:59.458908   96656 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:25:59.458919   96656 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:25:59.458929   96656 command_runner.go:130] > # 	"operations_errors",
	I1212 22:25:59.458939   96656 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:25:59.458953   96656 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:25:59.458963   96656 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:25:59.458970   96656 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:25:59.458974   96656 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:25:59.458985   96656 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:25:59.458992   96656 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:25:59.459002   96656 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:25:59.459009   96656 command_runner.go:130] > # 	"containers_oom",
	I1212 22:25:59.459019   96656 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:25:59.459027   96656 command_runner.go:130] > # 	"operations_total",
	I1212 22:25:59.459037   96656 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:25:59.459048   96656 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:25:59.459057   96656 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:25:59.459072   96656 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:25:59.459079   96656 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:25:59.459086   96656 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:25:59.459096   96656 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:25:59.459111   96656 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:25:59.459124   96656 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:25:59.459133   96656 command_runner.go:130] > # ]
	I1212 22:25:59.459142   96656 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:25:59.459300   96656 command_runner.go:130] > # metrics_port = 9090
	I1212 22:25:59.459316   96656 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:25:59.459323   96656 command_runner.go:130] > # metrics_socket = ""
	I1212 22:25:59.459335   96656 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:25:59.459345   96656 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:25:59.459352   96656 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:25:59.459359   96656 command_runner.go:130] > # certificate on any modification event.
	I1212 22:25:59.459363   96656 command_runner.go:130] > # metrics_cert = ""
	I1212 22:25:59.459371   96656 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:25:59.459376   96656 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:25:59.459382   96656 command_runner.go:130] > # metrics_key = ""
	I1212 22:25:59.459388   96656 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:25:59.459394   96656 command_runner.go:130] > [crio.tracing]
	I1212 22:25:59.459400   96656 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:25:59.459466   96656 command_runner.go:130] > # enable_tracing = false
	I1212 22:25:59.459486   96656 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 22:25:59.459497   96656 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:25:59.459505   96656 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:25:59.459516   96656 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:25:59.459529   96656 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:25:59.459539   96656 command_runner.go:130] > [crio.stats]
	I1212 22:25:59.459552   96656 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:25:59.459564   96656 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:25:59.459575   96656 command_runner.go:130] > # stats_collection_period = 0
	I1212 22:25:59.459608   96656 command_runner.go:130] ! time="2023-12-12 22:25:59.433569329Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 22:25:59.459628   96656 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
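Several of the options dumped above are flagged with "This option supports live configuration reload"; per the header comments, CRI-O re-reads those after receiving SIGHUP, so they can be changed without restarting the daemon. A minimal sketch, assuming the standard /etc/crio/crio.conf location and the commented-out log_level default shown above:

  # raise the verbosity of a running CRI-O without a restart (paths/values assumed from the dump above)
  sudo sed -i 's/^# log_level = "info"/log_level = "debug"/' /etc/crio/crio.conf
  sudo systemctl kill -s SIGHUP crio    # SIGHUP triggers the partial reload described in the config header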
	I1212 22:25:59.459724   96656 cni.go:84] Creating CNI manager for ""
	I1212 22:25:59.459737   96656 cni.go:136] 2 nodes found, recommending kindnet
	I1212 22:25:59.459747   96656 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:25:59.459767   96656 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-054207 NodeName:multinode-054207-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:25:59.459881   96656 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-054207-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
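The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what minikube renders for this node. As a rough, hedged way to sanity-check such a rendered config without changing the machine, the YAML can be saved to a file and fed to the matching kubeadm binary in dry-run mode; the /tmp path here is illustrative and not taken from this log, and this is only a validation sketch, not the command minikube itself runs:

  # assumes the YAML printed above was written to /tmp/kubeadm.yaml
  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /tmp/kubeadm.yaml --dry-run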
	I1212 22:25:59.459928   96656 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:25:59.459987   96656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:25:59.468658   96656 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1212 22:25:59.468705   96656 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1212 22:25:59.468748   96656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1212 22:25:59.477344   96656 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1212 22:25:59.477378   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 22:25:59.477381   96656 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1212 22:25:59.477387   96656 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1212 22:25:59.477470   96656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1212 22:25:59.481920   96656 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 22:25:59.481957   96656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1212 22:25:59.481977   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1212 22:26:00.083456   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 22:26:00.083546   96656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1212 22:26:00.088352   96656 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 22:26:00.088555   96656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1212 22:26:00.088675   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1212 22:26:00.474513   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:26:00.488691   96656 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 22:26:00.488791   96656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1212 22:26:00.493532   96656 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 22:26:00.493579   96656 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1212 22:26:00.493603   96656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
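Editor's note: each of the three binaries above is downloaded alongside a .sha256 file (the checksum=file:... query in the download URLs) and only trusted if the digests match. The following is a minimal Go sketch of that kind of verification; the function name and paths are hypothetical and this is an illustration, not minikube's download code.

    package binverify

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // VerifyChecksum hashes the file at binPath and compares the result with the
    // hex digest stored in sumPath (first whitespace-separated field, as produced
    // by sha256sum-style tools).
    func VerifyChecksum(binPath, sumPath string) error {
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))

        raw, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(raw))
        if len(fields) == 0 {
            return fmt.Errorf("no digest found in %s", sumPath)
        }
        if want := fields[0]; got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binPath, got, want)
        }
        return nil
    }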
	I1212 22:26:01.018139   96656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 22:26:01.028727   96656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 22:26:01.044917   96656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:26:01.061029   96656 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1212 22:26:01.065200   96656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:26:01.077339   96656 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:26:01.077562   96656 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:26:01.077734   96656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:26:01.077786   96656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:26:01.092676   96656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I1212 22:26:01.093071   96656 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:26:01.093545   96656 main.go:141] libmachine: Using API Version  1
	I1212 22:26:01.093570   96656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:26:01.093861   96656 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:26:01.094080   96656 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:26:01.094233   96656 start.go:304] JoinCluster: &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:26:01.094326   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 22:26:01.094343   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:26:01.097137   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:26:01.097565   96656 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:26:01.097599   96656 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:26:01.097706   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:26:01.097882   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:26:01.098042   96656 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:26:01.098241   96656 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:26:01.287661   96656 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token grvswk.tl1cuc7kqwuf9cds --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:26:01.290613   96656 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:26:01.290651   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token grvswk.tl1cuc7kqwuf9cds --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-054207-m02"
	I1212 22:26:01.336153   96656 command_runner.go:130] ! W1212 22:26:01.324306     825 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 22:26:01.458458   96656 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:26:04.177263   96656 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:26:04.177302   96656 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 22:26:04.177329   96656 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 22:26:04.177343   96656 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:26:04.177356   96656 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:26:04.177364   96656 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:26:04.177374   96656 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 22:26:04.177396   96656 command_runner.go:130] > This node has joined the cluster:
	I1212 22:26:04.177408   96656 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 22:26:04.177418   96656 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 22:26:04.177428   96656 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 22:26:04.177450   96656 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token grvswk.tl1cuc7kqwuf9cds --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-054207-m02": (2.886785325s)
	I1212 22:26:04.177475   96656 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 22:26:04.463303   96656 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1212 22:26:04.463422   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-054207 minikube.k8s.io/updated_at=2023_12_12T22_26_04_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:26:04.573565   96656 command_runner.go:130] > node/multinode-054207-m02 labeled
	I1212 22:26:04.575717   96656 start.go:306] JoinCluster complete in 3.481478874s
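Editor's note: the --discovery-token-ca-cert-hash passed to the kubeadm join command above is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info. Below is a short illustrative Go sketch of deriving such a pin from a ca.crt; the function name is hypothetical, and kubeadm's own pubkeypin package is the canonical implementation.

    package joinhash

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // CACertHash returns the "sha256:<hex>" pin for the first certificate in the
    // PEM file at caPath, hashing its DER-encoded SubjectPublicKeyInfo.
    func CACertHash(caPath string) (string, error) {
        data, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block found in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }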
	I1212 22:26:04.575746   96656 cni.go:84] Creating CNI manager for ""
	I1212 22:26:04.575752   96656 cni.go:136] 2 nodes found, recommending kindnet
	I1212 22:26:04.575818   96656 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:26:04.581798   96656 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:26:04.581823   96656 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 22:26:04.581830   96656 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 22:26:04.581836   96656 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:26:04.581842   96656 command_runner.go:130] > Access: 2023-12-12 22:24:38.651278855 +0000
	I1212 22:26:04.581847   96656 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 22:26:04.581852   96656 command_runner.go:130] > Change: 2023-12-12 22:24:36.827278855 +0000
	I1212 22:26:04.581855   96656 command_runner.go:130] >  Birth: -
	I1212 22:26:04.581909   96656 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:26:04.581924   96656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:26:04.602579   96656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:26:04.933709   96656 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:26:04.933740   96656 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:26:04.933750   96656 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 22:26:04.933758   96656 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 22:26:04.934170   96656 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:26:04.934385   96656 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:26:04.934698   96656 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:26:04.934711   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:04.934720   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:04.934725   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:04.937052   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:04.937067   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:04.937074   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:04 GMT
	I1212 22:26:04.937080   96656 round_trippers.go:580]     Audit-Id: f6c85a6a-bcff-4cc8-8d95-7492fd53bf45
	I1212 22:26:04.937086   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:04.937091   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:04.937097   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:04.937102   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:04.937112   96656 round_trippers.go:580]     Content-Length: 291
	I1212 22:26:04.937133   96656 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"449","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 22:26:04.937226   96656 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-054207" context rescaled to 1 replicas
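Editor's note: the rescale logged here goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale request just above). The sketch below shows an equivalent operation with client-go, assuming an already-constructed clientset; it is illustrative only and not minikube's kapi code.

    package rescale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // RescaleDeployment sets a Deployment's replica count via its scale
    // subresource, as the coredns rescale above does.
    func RescaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired size
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }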
	I1212 22:26:04.937254   96656 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:26:04.939230   96656 out.go:177] * Verifying Kubernetes components...
	I1212 22:26:04.940661   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:26:04.960980   96656 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:26:04.961299   96656 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:26:04.961630   96656 node_ready.go:35] waiting up to 6m0s for node "multinode-054207-m02" to be "Ready" ...
	I1212 22:26:04.961716   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:04.961727   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:04.961740   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:04.961754   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:04.965060   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:04.965080   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:04.965086   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:04.965093   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:04.965098   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:04 GMT
	I1212 22:26:04.965105   96656 round_trippers.go:580]     Audit-Id: 02592d77-b60d-471c-a367-0f34e906dc2e
	I1212 22:26:04.965112   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:04.965120   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:04.965127   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:04.965260   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:04.965588   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:04.965603   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:04.965614   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:04.965623   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:04.968409   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:04.968433   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:04.968445   96656 round_trippers.go:580]     Audit-Id: 107af229-0c10-4270-96fc-d6b2755e6b63
	I1212 22:26:04.968454   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:04.968463   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:04.968475   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:04.968482   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:04.968492   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:04.968504   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:04 GMT
	I1212 22:26:04.968589   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:05.470106   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:05.470134   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:05.470148   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:05.470158   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:05.472949   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:05.472975   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:05.472985   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:05.472994   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:05.473003   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:05.473011   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:05.473019   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:05 GMT
	I1212 22:26:05.473027   96656 round_trippers.go:580]     Audit-Id: 3fef692d-88ee-4558-b6da-0fbb0bcc6cf9
	I1212 22:26:05.473045   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:05.473149   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:05.969752   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:05.969777   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:05.969787   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:05.969792   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:05.973109   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:05.973136   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:05.973144   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:05.973150   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:05.973155   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:05 GMT
	I1212 22:26:05.973160   96656 round_trippers.go:580]     Audit-Id: e261898f-fb16-4af3-9151-c706b1467674
	I1212 22:26:05.973165   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:05.973170   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:05.973178   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:05.973283   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:06.469153   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:06.469180   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:06.469189   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:06.469195   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:06.473074   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:06.473100   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:06.473108   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:06 GMT
	I1212 22:26:06.473113   96656 round_trippers.go:580]     Audit-Id: 6d9d7e45-797a-41d4-b2d8-8c45b7b9f41a
	I1212 22:26:06.473120   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:06.473128   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:06.473137   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:06.473146   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:06.473156   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:06.473227   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:06.969863   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:06.969891   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:06.969900   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:06.969906   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:06.973891   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:06.973925   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:06.973936   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:06.973945   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:06.973954   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:06.973962   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:06.973970   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:06.973979   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:06 GMT
	I1212 22:26:06.973989   96656 round_trippers.go:580]     Audit-Id: 88eecda5-a2bf-44e6-a645-06a66d401168
	I1212 22:26:06.974108   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:06.974446   96656 node_ready.go:58] node "multinode-054207-m02" has status "Ready":"False"
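Editor's note: the loop above re-reads the node object roughly every half second until its Ready condition turns True or the 6m0s budget expires. An illustrative client-go sketch of an equivalent wait follows; the names are hypothetical and this is not minikube's node_ready code.

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls a node until its NodeReady condition reports True,
    // which is what the repeated GETs on multinode-054207-m02 are checking for.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }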
	I1212 22:26:07.469549   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:07.469571   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:07.469581   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:07.469586   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:07.472782   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:07.472803   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:07.472810   96656 round_trippers.go:580]     Audit-Id: d4945402-4990-4ba2-907a-38bc2b3845fd
	I1212 22:26:07.472816   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:07.472821   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:07.472826   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:07.472831   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:07.472836   96656 round_trippers.go:580]     Content-Length: 4082
	I1212 22:26:07.472844   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:07 GMT
	I1212 22:26:07.472943   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"502","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1212 22:26:07.970113   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:07.970138   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:07.970148   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:07.970154   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:07.973203   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:07.973232   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:07.973244   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:07.973254   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:07 GMT
	I1212 22:26:07.973264   96656 round_trippers.go:580]     Audit-Id: cfa63462-0379-46d9-94ee-da160405eca2
	I1212 22:26:07.973272   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:07.973281   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:07.973294   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:07.973700   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:08.469363   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:08.469390   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:08.469399   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:08.469406   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:08.472857   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:08.472883   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:08.472899   96656 round_trippers.go:580]     Audit-Id: 37d6c54f-8659-43f0-a1e1-9cc07ae04529
	I1212 22:26:08.472907   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:08.472916   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:08.472923   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:08.472928   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:08.472933   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:08 GMT
	I1212 22:26:08.473211   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:08.969948   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:08.969972   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:08.969981   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:08.969990   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:08.973882   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:08.973911   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:08.973922   96656 round_trippers.go:580]     Audit-Id: ec1c9c45-a4e8-4ced-83d4-7b6510e91c29
	I1212 22:26:08.973929   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:08.973936   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:08.973944   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:08.973951   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:08.973959   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:08 GMT
	I1212 22:26:08.974388   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:08.974665   96656 node_ready.go:58] node "multinode-054207-m02" has status "Ready":"False"
	I1212 22:26:09.470093   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:09.470117   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:09.470126   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:09.470133   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:09.473367   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:09.473389   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:09.473396   96656 round_trippers.go:580]     Audit-Id: d268c259-0873-4d36-b7d2-02f16516c753
	I1212 22:26:09.473401   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:09.473407   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:09.473415   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:09.473424   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:09.473431   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:09 GMT
	I1212 22:26:09.473872   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:09.969566   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:09.969600   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:09.969613   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:09.969624   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:09.973585   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:09.973616   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:09.973638   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:09.973657   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:09 GMT
	I1212 22:26:09.973665   96656 round_trippers.go:580]     Audit-Id: 7e2f93bd-2756-4b07-8cba-809d6b05b8b3
	I1212 22:26:09.973673   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:09.973684   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:09.973697   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:09.974070   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:10.469506   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:10.469534   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:10.469546   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:10.469553   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:10.472557   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:10.472590   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:10.472600   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:10.472608   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:10.472615   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:10.472624   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:10.472632   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:10 GMT
	I1212 22:26:10.472641   96656 round_trippers.go:580]     Audit-Id: 0d6ae1b2-1c8f-412d-82a3-4c309cb3ba00
	I1212 22:26:10.472772   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:10.969424   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:10.969456   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:10.969469   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:10.969479   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:10.972530   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:10.972552   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:10.972560   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:10.972565   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:10 GMT
	I1212 22:26:10.972570   96656 round_trippers.go:580]     Audit-Id: 9ce6061e-6a4f-4fda-8ecd-fd4f16612349
	I1212 22:26:10.972575   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:10.972580   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:10.972595   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:10.972983   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:11.469273   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:11.469299   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:11.469316   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:11.469322   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:11.472478   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:11.472501   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:11.472509   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:11.472514   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:11.472519   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:11 GMT
	I1212 22:26:11.472524   96656 round_trippers.go:580]     Audit-Id: 82810429-832a-4b1f-a549-95d136698c28
	I1212 22:26:11.472530   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:11.472535   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:11.472888   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:11.473215   96656 node_ready.go:58] node "multinode-054207-m02" has status "Ready":"False"
	I1212 22:26:11.969577   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:11.969612   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:11.969627   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:11.969638   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:11.972672   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:11.972694   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:11.972701   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:11.972707   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:11 GMT
	I1212 22:26:11.972712   96656 round_trippers.go:580]     Audit-Id: 74930f87-a369-4483-9433-a43ce8fad734
	I1212 22:26:11.972717   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:11.972722   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:11.972727   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:11.972961   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"505","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3167 chars]
	I1212 22:26:12.469691   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:12.469718   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.469726   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.469732   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.472715   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.472736   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.472749   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.472755   96656 round_trippers.go:580]     Audit-Id: 7637d20f-4713-4af8-8c7d-796092ee4051
	I1212 22:26:12.472760   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.472765   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.472771   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.472779   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.473494   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"526","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3253 chars]
	I1212 22:26:12.473751   96656 node_ready.go:49] node "multinode-054207-m02" has status "Ready":"True"
	I1212 22:26:12.473770   96656 node_ready.go:38] duration metric: took 7.512122361s waiting for node "multinode-054207-m02" to be "Ready" ...
	I1212 22:26:12.473779   96656 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:26:12.473846   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:26:12.473854   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.473861   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.473867   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.477724   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:12.477742   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.477748   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.477754   96656 round_trippers.go:580]     Audit-Id: 4727cd4c-5d21-48a0-a9d0-808b7ac200c9
	I1212 22:26:12.477759   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.477764   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.477769   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.477774   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.479181   96656 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"527"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"445","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67364 chars]
	I1212 22:26:12.481184   96656 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.481273   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:26:12.481283   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.481291   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.481297   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.483936   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.483963   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.483973   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.483982   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.483989   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.483995   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.484000   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.484005   96656 round_trippers.go:580]     Audit-Id: 05ee177a-0992-45c0-a440-f283b005cf7e
	I1212 22:26:12.484269   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"445","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1212 22:26:12.484757   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:12.484771   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.484779   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.484784   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.486818   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.486835   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.486846   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.486853   96656 round_trippers.go:580]     Audit-Id: bff317f2-bb06-4383-bbad-897158f0c961
	I1212 22:26:12.486859   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.486864   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.486869   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.486875   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.487224   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:12.487543   96656 pod_ready.go:92] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:12.487560   96656 pod_ready.go:81] duration metric: took 6.353755ms waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.487569   96656 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.487624   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:26:12.487633   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.487640   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.487645   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.489782   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.489802   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.489812   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.489820   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.489826   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.489831   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.489840   96656 round_trippers.go:580]     Audit-Id: 940f227e-2223-4ec5-a70d-db1c460f46d4
	I1212 22:26:12.489845   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.489992   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"439","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1212 22:26:12.490368   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:12.490381   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.490388   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.490394   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.492635   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.492650   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.492655   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.492661   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.492666   96656 round_trippers.go:580]     Audit-Id: 3f6540b8-d13f-4490-8c0e-1aa2b3dd12f2
	I1212 22:26:12.492671   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.492676   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.492680   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.492974   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:12.493262   96656 pod_ready.go:92] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:12.493283   96656 pod_ready.go:81] duration metric: took 5.708427ms waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.493297   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.493359   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:26:12.493368   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.493377   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.493385   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.495553   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.495567   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.495573   96656 round_trippers.go:580]     Audit-Id: f7ca8207-d439-48ad-9db4-db15292dfa19
	I1212 22:26:12.495578   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.495583   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.495589   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.495594   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.495599   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.496113   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"441","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1212 22:26:12.496478   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:12.496490   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.496497   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.496503   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.499103   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.499121   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.499128   96656 round_trippers.go:580]     Audit-Id: 889e3805-b65e-4f30-bed4-3da747ec09e7
	I1212 22:26:12.499133   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.499138   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.499143   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.499148   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.499153   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.499378   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:12.499659   96656 pod_ready.go:92] pod "kube-apiserver-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:12.499673   96656 pod_ready.go:81] duration metric: took 6.361533ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.499683   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.499731   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:26:12.499738   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.499746   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.499751   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.504708   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:26:12.504727   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.504734   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.504739   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.504744   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.504750   96656 round_trippers.go:580]     Audit-Id: 70604e7a-ed98-4e38-ad38-9f66eb45ebe7
	I1212 22:26:12.504755   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.504760   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.504919   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"374","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1212 22:26:12.505323   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:12.505337   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.505344   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.505350   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.507696   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.507718   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.507727   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.507741   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.507751   96656 round_trippers.go:580]     Audit-Id: b4dd6a51-3ba8-427d-b1ea-aa665183b775
	I1212 22:26:12.507765   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.507772   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.507783   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.507970   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:12.508354   96656 pod_ready.go:92] pod "kube-controller-manager-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:12.508380   96656 pod_ready.go:81] duration metric: took 8.689852ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.508409   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.669796   96656 request.go:629] Waited for 161.296057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:26:12.669869   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:26:12.669875   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.669883   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.669890   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.672644   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:12.672670   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.672681   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.672689   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.672701   96656 round_trippers.go:580]     Audit-Id: 4733da74-caad-44d2-b1db-fd761146c3e4
	I1212 22:26:12.672708   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.672715   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.672723   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.672891   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"515","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:26:12.869884   96656 request.go:629] Waited for 196.448365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:12.869966   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:26:12.869979   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:12.869990   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:12.869999   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:12.875014   96656 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:26:12.875045   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:12.875058   96656 round_trippers.go:580]     Audit-Id: e321a2ca-080d-4604-8178-dbfc8e66518b
	I1212 22:26:12.875066   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:12.875075   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:12.875092   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:12.875109   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:12.875117   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:12 GMT
	I1212 22:26:12.876040   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"528","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_26_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3133 chars]
	I1212 22:26:12.876323   96656 pod_ready.go:92] pod "kube-proxy-jtfmt" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:12.876339   96656 pod_ready.go:81] duration metric: took 367.916279ms waiting for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:12.876349   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:13.069741   96656 request.go:629] Waited for 193.31589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:26:13.069823   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:26:13.069828   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:13.069837   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:13.069843   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:13.072479   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:13.072508   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:13.072517   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:13.072525   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:13.072535   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:13.072542   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:13.072552   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:13 GMT
	I1212 22:26:13.072561   96656 round_trippers.go:580]     Audit-Id: eee16e42-1540-447b-923f-743bb6d85c87
	I1212 22:26:13.072867   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"412","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:26:13.270697   96656 request.go:629] Waited for 197.362769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:13.270757   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:13.270762   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:13.270770   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:13.270775   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:13.273728   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:13.273756   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:13.273766   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:13.273774   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:13.273781   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:13 GMT
	I1212 22:26:13.273794   96656 round_trippers.go:580]     Audit-Id: 20e6295e-5b0d-4fa1-b419-659b8247646e
	I1212 22:26:13.273801   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:13.273811   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:13.274045   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:13.274410   96656 pod_ready.go:92] pod "kube-proxy-rnx8m" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:13.274427   96656 pod_ready.go:81] duration metric: took 398.071098ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:13.274438   96656 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:13.469801   96656 request.go:629] Waited for 195.288315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:26:13.469886   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:26:13.469892   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:13.469900   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:13.469906   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:13.472567   96656 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:26:13.472588   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:13.472594   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:13.472602   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:13.472614   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:13 GMT
	I1212 22:26:13.472629   96656 round_trippers.go:580]     Audit-Id: 500459cb-ecda-465a-a4fa-d514761ef0c8
	I1212 22:26:13.472637   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:13.472648   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:13.472859   96656 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"440","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1212 22:26:13.670715   96656 request.go:629] Waited for 197.355155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:13.670780   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:26:13.670785   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:13.670793   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:13.670800   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:13.674352   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:13.674404   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:13.674416   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:13 GMT
	I1212 22:26:13.674424   96656 round_trippers.go:580]     Audit-Id: fa273770-6d98-4f1a-8000-33104cc44e2e
	I1212 22:26:13.674433   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:13.674441   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:13.674449   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:13.674460   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:13.675662   96656 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1212 22:26:13.676093   96656 pod_ready.go:92] pod "kube-scheduler-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:26:13.676114   96656 pod_ready.go:81] duration metric: took 401.665177ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:26:13.676125   96656 pod_ready.go:38] duration metric: took 1.202325145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:26:13.676141   96656 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:26:13.676188   96656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:26:13.691802   96656 system_svc.go:56] duration metric: took 15.649167ms WaitForService to wait for kubelet.
	I1212 22:26:13.691835   96656 kubeadm.go:581] duration metric: took 8.754556678s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:26:13.691859   96656 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:26:13.870311   96656 request.go:629] Waited for 178.361086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I1212 22:26:13.870369   96656 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:26:13.870374   96656 round_trippers.go:469] Request Headers:
	I1212 22:26:13.870382   96656 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:26:13.870388   96656 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:26:13.873675   96656 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:26:13.873704   96656 round_trippers.go:577] Response Headers:
	I1212 22:26:13.873715   96656 round_trippers.go:580]     Audit-Id: ba18038f-36e3-44da-8cf8-7c02b1f255d5
	I1212 22:26:13.873722   96656 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:26:13.873731   96656 round_trippers.go:580]     Content-Type: application/json
	I1212 22:26:13.873739   96656 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:26:13.873747   96656 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:26:13.873755   96656 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:26:13 GMT
	I1212 22:26:13.874281   96656 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"422","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10077 chars]
	I1212 22:26:13.874719   96656 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:26:13.874736   96656 node_conditions.go:123] node cpu capacity is 2
	I1212 22:26:13.874746   96656 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:26:13.874751   96656 node_conditions.go:123] node cpu capacity is 2
	I1212 22:26:13.874754   96656 node_conditions.go:105] duration metric: took 182.889748ms to run NodePressure ...
	I1212 22:26:13.874766   96656 start.go:228] waiting for startup goroutines ...
	I1212 22:26:13.874793   96656 start.go:242] writing updated cluster config ...
	I1212 22:26:13.875118   96656 ssh_runner.go:195] Run: rm -f paused
	I1212 22:26:13.923736   96656 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:26:13.927073   96656 out.go:177] * Done! kubectl is now configured to use "multinode-054207" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 22:24:37 UTC, ends at Tue 2023-12-12 22:26:21 UTC. --
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.103768545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419981103756038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6036b463-901f-42d2-8d57-c586799fae32 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.104498442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a7e3471a-fb56-4c27-a226-cce5c03daf57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.104577601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a7e3471a-fb56-4c27-a226-cce5c03daf57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.104802830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4e605c15286a6dd0158a0f6acebf4d0523298acf6ff0b2068ca0e409a265c50,PodSandboxId:fb9bc086370405ca863d3344a7f2dac3496955a83abf8c40a31d61424062d065,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702419976669917836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd80a086ab49ba80cc221393fa84141576488bcd9a64c5c377cc30738e43ba3c,PodSandboxId:6994c3232e70902af39600fafdbeca8e530c51880373c8f0b015f7e0a2969ee8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702419929648986720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b25a53e29597b71b08956373a51bedb338f083cf90b09dcd393e090fafef8b1,PodSandboxId:a487a44d83df802f1ed9f11f75b943f62b3ec8fe6fab5512584c64168cb04790,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419929463560090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb1185cc3b9e9c646ec7adc7e79054bf0dd765215e7b1b08d7c7246c498a6f4,PodSandboxId:c8f4920748e507a73c59029399d9d17ecf61fe4b77152285f193e7518960c8d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702419927020998001,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f58bda8451bd675b89b08c73f680476cd5ebf5df9d42dd66bb331c686bf00abd,PodSandboxId:8970aea82debb5af22c392453ac52c7b34ac1da9f81ce8309c8dae4778bed964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702419924943533691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858
b4b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d06fadeaf3283c3da77e1d88ca69dac8f2a35f1c02b7b23646240127908db5,PodSandboxId:e55d77e3fa1356eb70841796a1e7937738c8990e8c72c6e9c955b597cd64d5ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702419902886656354,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c
ec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639,PodSandboxId:b93ea41ceb702772f9c9c8a2c74b13d10267a91508820c8ffd2b37336cc791cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702419902727944890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc
4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.container.hash: fa2c3a7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68652278af5a6b78eecc146a3554b77daf6c89d9016725cfa57a92dbaae46cc4,PodSandboxId:843632dc6a8f9f29f7c3dc1879edec62f5303e39f66c66287d3085031efb404e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702419902460770236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{i
o.kubernetes.container.hash: 984c1859,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fbef6dce41459895c32f16c9559705143286340f2be9f88494e78aa0a8eeff,PodSandboxId:a9e9574d9051a8b3d6fc718b3715265df445dd3ce89ae9a837307531e2a72960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702419902489066303,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annotations:map[string]string{io.kubernetes
.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a7e3471a-fb56-4c27-a226-cce5c03daf57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.148250217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=91b9266a-b5ed-438b-9d8c-c904555f3d61 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.148390930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=91b9266a-b5ed-438b-9d8c-c904555f3d61 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.150229278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1ec26d9a-692a-4209-8021-a6e4ca753c13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.150665864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419981150653056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1ec26d9a-692a-4209-8021-a6e4ca753c13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.151273342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=29592abd-c619-4487-aa31-e66d8ffaef9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.151412511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=29592abd-c619-4487-aa31-e66d8ffaef9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.151599684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4e605c15286a6dd0158a0f6acebf4d0523298acf6ff0b2068ca0e409a265c50,PodSandboxId:fb9bc086370405ca863d3344a7f2dac3496955a83abf8c40a31d61424062d065,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702419976669917836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd80a086ab49ba80cc221393fa84141576488bcd9a64c5c377cc30738e43ba3c,PodSandboxId:6994c3232e70902af39600fafdbeca8e530c51880373c8f0b015f7e0a2969ee8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702419929648986720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b25a53e29597b71b08956373a51bedb338f083cf90b09dcd393e090fafef8b1,PodSandboxId:a487a44d83df802f1ed9f11f75b943f62b3ec8fe6fab5512584c64168cb04790,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419929463560090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb1185cc3b9e9c646ec7adc7e79054bf0dd765215e7b1b08d7c7246c498a6f4,PodSandboxId:c8f4920748e507a73c59029399d9d17ecf61fe4b77152285f193e7518960c8d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702419927020998001,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f58bda8451bd675b89b08c73f680476cd5ebf5df9d42dd66bb331c686bf00abd,PodSandboxId:8970aea82debb5af22c392453ac52c7b34ac1da9f81ce8309c8dae4778bed964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702419924943533691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858
b4b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d06fadeaf3283c3da77e1d88ca69dac8f2a35f1c02b7b23646240127908db5,PodSandboxId:e55d77e3fa1356eb70841796a1e7937738c8990e8c72c6e9c955b597cd64d5ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702419902886656354,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c
ec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639,PodSandboxId:b93ea41ceb702772f9c9c8a2c74b13d10267a91508820c8ffd2b37336cc791cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702419902727944890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc
4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.container.hash: fa2c3a7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68652278af5a6b78eecc146a3554b77daf6c89d9016725cfa57a92dbaae46cc4,PodSandboxId:843632dc6a8f9f29f7c3dc1879edec62f5303e39f66c66287d3085031efb404e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702419902460770236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{i
o.kubernetes.container.hash: 984c1859,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fbef6dce41459895c32f16c9559705143286340f2be9f88494e78aa0a8eeff,PodSandboxId:a9e9574d9051a8b3d6fc718b3715265df445dd3ce89ae9a837307531e2a72960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702419902489066303,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annotations:map[string]string{io.kubernetes
.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=29592abd-c619-4487-aa31-e66d8ffaef9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.193379865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d63a6f5e-91da-4c56-b77f-9c6fe69ff08b name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.193447293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d63a6f5e-91da-4c56-b77f-9c6fe69ff08b name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.195084522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cf5f0dc7-aa98-43fc-8695-b6540e2b92d5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.195574426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419981195559332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cf5f0dc7-aa98-43fc-8695-b6540e2b92d5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.196083976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39102156-c7b7-4b7d-9716-a6f4f88e22ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.196167027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39102156-c7b7-4b7d-9716-a6f4f88e22ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.196444607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4e605c15286a6dd0158a0f6acebf4d0523298acf6ff0b2068ca0e409a265c50,PodSandboxId:fb9bc086370405ca863d3344a7f2dac3496955a83abf8c40a31d61424062d065,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702419976669917836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd80a086ab49ba80cc221393fa84141576488bcd9a64c5c377cc30738e43ba3c,PodSandboxId:6994c3232e70902af39600fafdbeca8e530c51880373c8f0b015f7e0a2969ee8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702419929648986720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b25a53e29597b71b08956373a51bedb338f083cf90b09dcd393e090fafef8b1,PodSandboxId:a487a44d83df802f1ed9f11f75b943f62b3ec8fe6fab5512584c64168cb04790,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419929463560090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb1185cc3b9e9c646ec7adc7e79054bf0dd765215e7b1b08d7c7246c498a6f4,PodSandboxId:c8f4920748e507a73c59029399d9d17ecf61fe4b77152285f193e7518960c8d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702419927020998001,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f58bda8451bd675b89b08c73f680476cd5ebf5df9d42dd66bb331c686bf00abd,PodSandboxId:8970aea82debb5af22c392453ac52c7b34ac1da9f81ce8309c8dae4778bed964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702419924943533691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858
b4b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d06fadeaf3283c3da77e1d88ca69dac8f2a35f1c02b7b23646240127908db5,PodSandboxId:e55d77e3fa1356eb70841796a1e7937738c8990e8c72c6e9c955b597cd64d5ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702419902886656354,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c
ec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639,PodSandboxId:b93ea41ceb702772f9c9c8a2c74b13d10267a91508820c8ffd2b37336cc791cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702419902727944890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc
4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.container.hash: fa2c3a7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68652278af5a6b78eecc146a3554b77daf6c89d9016725cfa57a92dbaae46cc4,PodSandboxId:843632dc6a8f9f29f7c3dc1879edec62f5303e39f66c66287d3085031efb404e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702419902460770236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{i
o.kubernetes.container.hash: 984c1859,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fbef6dce41459895c32f16c9559705143286340f2be9f88494e78aa0a8eeff,PodSandboxId:a9e9574d9051a8b3d6fc718b3715265df445dd3ce89ae9a837307531e2a72960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702419902489066303,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annotations:map[string]string{io.kubernetes
.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39102156-c7b7-4b7d-9716-a6f4f88e22ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.238277343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b3228a80-5059-480a-8042-bc8cf399054b name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.238394264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b3228a80-5059-480a-8042-bc8cf399054b name=/runtime.v1.RuntimeService/Version
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.239843898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ffc144d5-8130-433d-b880-4e95260fdba1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.240199770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702419981240188675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ffc144d5-8130-433d-b880-4e95260fdba1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.240755881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b1177340-e944-4414-9ebb-75dcee81c09e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.240800993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b1177340-e944-4414-9ebb-75dcee81c09e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:26:21 multinode-054207 crio[723]: time="2023-12-12 22:26:21.240975690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4e605c15286a6dd0158a0f6acebf4d0523298acf6ff0b2068ca0e409a265c50,PodSandboxId:fb9bc086370405ca863d3344a7f2dac3496955a83abf8c40a31d61424062d065,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702419976669917836,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd80a086ab49ba80cc221393fa84141576488bcd9a64c5c377cc30738e43ba3c,PodSandboxId:6994c3232e70902af39600fafdbeca8e530c51880373c8f0b015f7e0a2969ee8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702419929648986720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b25a53e29597b71b08956373a51bedb338f083cf90b09dcd393e090fafef8b1,PodSandboxId:a487a44d83df802f1ed9f11f75b943f62b3ec8fe6fab5512584c64168cb04790,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702419929463560090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfb1185cc3b9e9c646ec7adc7e79054bf0dd765215e7b1b08d7c7246c498a6f4,PodSandboxId:c8f4920748e507a73c59029399d9d17ecf61fe4b77152285f193e7518960c8d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702419927020998001,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f58bda8451bd675b89b08c73f680476cd5ebf5df9d42dd66bb331c686bf00abd,PodSandboxId:8970aea82debb5af22c392453ac52c7b34ac1da9f81ce8309c8dae4778bed964,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702419924943533691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858
b4b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d06fadeaf3283c3da77e1d88ca69dac8f2a35f1c02b7b23646240127908db5,PodSandboxId:e55d77e3fa1356eb70841796a1e7937738c8990e8c72c6e9c955b597cd64d5ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702419902886656354,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c
ec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639,PodSandboxId:b93ea41ceb702772f9c9c8a2c74b13d10267a91508820c8ffd2b37336cc791cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702419902727944890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc
4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.container.hash: fa2c3a7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68652278af5a6b78eecc146a3554b77daf6c89d9016725cfa57a92dbaae46cc4,PodSandboxId:843632dc6a8f9f29f7c3dc1879edec62f5303e39f66c66287d3085031efb404e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702419902460770236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{i
o.kubernetes.container.hash: 984c1859,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fbef6dce41459895c32f16c9559705143286340f2be9f88494e78aa0a8eeff,PodSandboxId:a9e9574d9051a8b3d6fc718b3715265df445dd3ce89ae9a837307531e2a72960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702419902489066303,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annotations:map[string]string{io.kubernetes
.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b1177340-e944-4414-9ebb-75dcee81c09e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e4e605c15286a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   fb9bc08637040       busybox-5bc68d56bd-7fg9p
	dd80a086ab49b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      51 seconds ago       Running             coredns                   0                   6994c3232e709       coredns-5dd5756b68-rj4p4
	8b25a53e29597       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       0                   a487a44d83df8       storage-provisioner
	bfb1185cc3b9e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      54 seconds ago       Running             kindnet-cni               0                   c8f4920748e50       kindnet-nj2sh
	f58bda8451bd6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      56 seconds ago       Running             kube-proxy                0                   8970aea82debb       kube-proxy-rnx8m
	39d06fadeaf32       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   e55d77e3fa135       kube-controller-manager-multinode-054207
	9056b25049142       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   b93ea41ceb702       kube-apiserver-multinode-054207
	53fbef6dce414       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   a9e9574d9051a       kube-scheduler-multinode-054207
	68652278af5a6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   843632dc6a8f9       etcd-multinode-054207
	
	* 
	* ==> coredns [dd80a086ab49ba80cc221393fa84141576488bcd9a64c5c377cc30738e43ba3c] <==
	* [INFO] 10.244.0.3:34949 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129235s
	[INFO] 10.244.1.2:34285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137625s
	[INFO] 10.244.1.2:45180 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001653861s
	[INFO] 10.244.1.2:37090 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145274s
	[INFO] 10.244.1.2:50444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084059s
	[INFO] 10.244.1.2:36593 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001230613s
	[INFO] 10.244.1.2:54129 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090971s
	[INFO] 10.244.1.2:42842 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099739s
	[INFO] 10.244.1.2:48428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194825s
	[INFO] 10.244.0.3:57129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001697s
	[INFO] 10.244.0.3:47641 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113489s
	[INFO] 10.244.0.3:34000 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073288s
	[INFO] 10.244.0.3:59449 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044846s
	[INFO] 10.244.1.2:40551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150659s
	[INFO] 10.244.1.2:42346 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198888s
	[INFO] 10.244.1.2:35495 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000237886s
	[INFO] 10.244.1.2:60962 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133012s
	[INFO] 10.244.0.3:33858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116171s
	[INFO] 10.244.0.3:35624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000202247s
	[INFO] 10.244.0.3:57415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102889s
	[INFO] 10.244.0.3:33257 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103258s
	[INFO] 10.244.1.2:53177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124101s
	[INFO] 10.244.1.2:39545 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104028s
	[INFO] 10.244.1.2:47733 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095398s
	[INFO] 10.244.1.2:33020 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142117s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-054207
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-054207
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-054207
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_25_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:25:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-054207
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:26:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:25:28 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:25:28 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:25:28 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:25:28 +0000   Tue, 12 Dec 2023 22:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-054207
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6950ea5719804682b508b18d9ee9af78
	  System UUID:                6950ea57-1980-4682-b508-b18d9ee9af78
	  Boot ID:                    4615f994-cc45-4fad-9b8a-736408669b7f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7fg9p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-rj4p4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-multinode-054207                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kindnet-nj2sh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      59s
	  kube-system                 kube-apiserver-multinode-054207             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-054207    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-rnx8m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-054207             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node multinode-054207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node multinode-054207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node multinode-054207 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node multinode-054207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node multinode-054207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s                kubelet          Node multinode-054207 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                node-controller  Node multinode-054207 event: Registered Node multinode-054207 in Controller
	  Normal  NodeReady                53s                kubelet          Node multinode-054207 status is now: NodeReady
	
	
	Name:               multinode-054207-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-054207-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-054207
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T22_26_04_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-054207-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:26:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:26:11 +0000   Tue, 12 Dec 2023 22:26:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:26:11 +0000   Tue, 12 Dec 2023 22:26:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:26:11 +0000   Tue, 12 Dec 2023 22:26:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:26:11 +0000   Tue, 12 Dec 2023 22:26:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    multinode-054207-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b9d7f94f20c44c4a4ef541de7398d6d
	  System UUID:                6b9d7f94-f20c-44c4-a4ef-541de7398d6d
	  Boot ID:                    c3f7ddfc-243c-448e-981f-b2e9326e1a38
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-trmtr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-gh2q6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-jtfmt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x5 over 19s)  kubelet          Node multinode-054207-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x5 over 19s)  kubelet          Node multinode-054207-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x5 over 19s)  kubelet          Node multinode-054207-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-054207-m02 event: Registered Node multinode-054207-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-054207-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec12 22:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068158] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.383089] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.479268] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139599] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.018644] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.408049] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.104594] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.140190] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.118570] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.227366] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[Dec12 22:25] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +9.281874] systemd-fstab-generator[1263]: Ignoring "noauto" for root device
	[ +20.550538] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [68652278af5a6b78eecc146a3554b77daf6c89d9016725cfa57a92dbaae46cc4] <==
	* {"level":"info","ts":"2023-12-12T22:25:04.399531Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","added-peer-id":"bbf1bb039b0d3451","added-peer-peer-urls":["https://192.168.39.172:2380"]}
	{"level":"info","ts":"2023-12-12T22:25:04.399691Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"bbf1bb039b0d3451","initial-advertise-peer-urls":["https://192.168.39.172:2380"],"listen-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T22:25:04.399737Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T22:25:05.129437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T22:25:05.129537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T22:25:05.129571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 1"}
	{"level":"info","ts":"2023-12-12T22:25:05.129601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T22:25:05.129624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2023-12-12T22:25:05.129665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T22:25:05.129693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2023-12-12T22:25:05.134582Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:multinode-054207 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T22:25:05.134769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:25:05.135484Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:25:05.144683Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T22:25:05.14477Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:25:05.145429Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T22:25:05.145539Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T22:25:05.146355Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.172:2379"}
	{"level":"info","ts":"2023-12-12T22:25:05.147437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:25:05.14759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:25:05.147614Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:26:07.888914Z","caller":"traceutil/trace.go:171","msg":"trace[946742630] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"100.850147ms","start":"2023-12-12T22:26:07.788018Z","end":"2023-12-12T22:26:07.888869Z","steps":["trace[946742630] 'process raft request'  (duration: 100.79742ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:26:07.889609Z","caller":"traceutil/trace.go:171","msg":"trace[1173778966] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"135.520471ms","start":"2023-12-12T22:26:07.75352Z","end":"2023-12-12T22:26:07.889041Z","steps":["trace[1173778966] 'process raft request'  (duration: 134.163897ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:26:09.826253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.395311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:26:09.826469Z","caller":"traceutil/trace.go:171","msg":"trace[1326838939] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:516; }","duration":"176.679769ms","start":"2023-12-12T22:26:09.649771Z","end":"2023-12-12T22:26:09.82645Z","steps":["trace[1326838939] 'range keys from in-memory index tree'  (duration: 176.320168ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:26:21 up 1 min,  0 users,  load average: 0.52, 0.24, 0.09
	Linux multinode-054207 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [bfb1185cc3b9e9c646ec7adc7e79054bf0dd765215e7b1b08d7c7246c498a6f4] <==
	* I1212 22:25:27.867099       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 22:25:27.867259       1 main.go:107] hostIP = 192.168.39.172
	podIP = 192.168.39.172
	I1212 22:25:27.867559       1 main.go:116] setting mtu 1500 for CNI 
	I1212 22:25:27.867606       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 22:25:27.867641       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 22:25:28.467454       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:25:28.467509       1 main.go:227] handling current node
	I1212 22:25:38.481993       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:25:38.482046       1 main.go:227] handling current node
	I1212 22:25:48.490966       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:25:48.491019       1 main.go:227] handling current node
	I1212 22:25:58.496129       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:25:58.496190       1 main.go:227] handling current node
	I1212 22:26:08.518216       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:26:08.518448       1 main.go:227] handling current node
	I1212 22:26:08.518495       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:26:08.518527       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	I1212 22:26:08.519048       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.15 Flags: [] Table: 0} 
	I1212 22:26:18.530923       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:26:18.530977       1 main.go:227] handling current node
	I1212 22:26:18.530992       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:26:18.530998       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639] <==
	* I1212 22:25:07.084246       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:25:07.118449       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 22:25:07.118524       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 22:25:07.118560       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 22:25:07.118807       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 22:25:07.120355       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 22:25:07.120615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 22:25:07.130387       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 22:25:07.156986       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:25:07.182217       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 22:25:07.973645       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 22:25:07.979107       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 22:25:07.979175       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 22:25:08.635664       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:25:08.704119       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 22:25:08.793561       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 22:25:08.800574       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.172]
	I1212 22:25:08.801504       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 22:25:08.805897       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 22:25:09.060252       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 22:25:10.390094       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 22:25:10.418598       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 22:25:10.434944       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 22:25:22.697542       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1212 22:25:22.746925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [39d06fadeaf3283c3da77e1d88ca69dac8f2a35f1c02b7b23646240127908db5] <==
	* I1212 22:25:28.639775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="491.915µs"
	I1212 22:25:28.671927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="180.597µs"
	I1212 22:25:30.719781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.342µs"
	I1212 22:25:30.768585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.707484ms"
	I1212 22:25:30.770369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.854µs"
	I1212 22:25:32.732642       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 22:26:03.807087       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-054207-m02\" does not exist"
	I1212 22:26:03.820994       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-054207-m02" podCIDRs=["10.244.1.0/24"]
	I1212 22:26:03.833146       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jtfmt"
	I1212 22:26:03.847951       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gh2q6"
	I1212 22:26:07.739267       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-054207-m02"
	I1212 22:26:07.739475       1 event.go:307] "Event occurred" object="multinode-054207-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-054207-m02 event: Registered Node multinode-054207-m02 in Controller"
	I1212 22:26:12.007891       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:26:14.620809       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 22:26:14.637220       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-trmtr"
	I1212 22:26:14.665097       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-7fg9p"
	I1212 22:26:14.684870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.037865ms"
	I1212 22:26:14.692572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.535247ms"
	I1212 22:26:14.693611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.066µs"
	I1212 22:26:14.699682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="106.93µs"
	I1212 22:26:14.718522       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.411µs"
	I1212 22:26:16.901245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.365475ms"
	I1212 22:26:16.901472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.519µs"
	I1212 22:26:17.567622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.706834ms"
	I1212 22:26:17.567835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.65µs"
	
	* 
	* ==> kube-proxy [f58bda8451bd675b89b08c73f680476cd5ebf5df9d42dd66bb331c686bf00abd] <==
	* I1212 22:25:25.161622       1 server_others.go:69] "Using iptables proxy"
	I1212 22:25:25.185274       1 node.go:141] Successfully retrieved node IP: 192.168.39.172
	I1212 22:25:25.234102       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:25:25.234173       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:25:25.236987       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:25:25.237068       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:25:25.237368       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:25:25.237416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:25:25.238974       1 config.go:188] "Starting service config controller"
	I1212 22:25:25.239027       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:25:25.239059       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:25:25.239074       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:25:25.239758       1 config.go:315] "Starting node config controller"
	I1212 22:25:25.239796       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:25:25.339529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 22:25:25.339620       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:25:25.340025       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [53fbef6dce41459895c32f16c9559705143286340f2be9f88494e78aa0a8eeff] <==
	* W1212 22:25:07.120572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:25:07.120582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 22:25:07.120616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:25:07.120623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 22:25:07.120693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:25:07.120702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:25:07.122545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:25:07.122589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:25:07.987212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 22:25:07.987505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 22:25:08.033253       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:25:08.033584       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 22:25:08.034931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:25:08.034975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 22:25:08.045421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:25:08.045503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:25:08.125276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:25:08.125469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:25:08.145854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:25:08.145930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:25:08.320933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:25:08.321031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:25:08.368496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:25:08.368586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1212 22:25:10.308034       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:24:37 UTC, ends at Tue 2023-12-12 22:26:21 UTC. --
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.807024    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8875d71-d50e-44f1-92c1-db1858b4b3bb-kube-proxy\") pod \"kube-proxy-rnx8m\" (UID: \"e8875d71-d50e-44f1-92c1-db1858b4b3bb\") " pod="kube-system/kube-proxy-rnx8m"
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.807045    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8875d71-d50e-44f1-92c1-db1858b4b3bb-xtables-lock\") pod \"kube-proxy-rnx8m\" (UID: \"e8875d71-d50e-44f1-92c1-db1858b4b3bb\") " pod="kube-system/kube-proxy-rnx8m"
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.907632    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/947b4acb-082a-436b-b68f-d253f391ee24-xtables-lock\") pod \"kindnet-nj2sh\" (UID: \"947b4acb-082a-436b-b68f-d253f391ee24\") " pod="kube-system/kindnet-nj2sh"
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.907700    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/947b4acb-082a-436b-b68f-d253f391ee24-cni-cfg\") pod \"kindnet-nj2sh\" (UID: \"947b4acb-082a-436b-b68f-d253f391ee24\") " pod="kube-system/kindnet-nj2sh"
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.907721    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/947b4acb-082a-436b-b68f-d253f391ee24-lib-modules\") pod \"kindnet-nj2sh\" (UID: \"947b4acb-082a-436b-b68f-d253f391ee24\") " pod="kube-system/kindnet-nj2sh"
	Dec 12 22:25:22 multinode-054207 kubelet[1270]: I1212 22:25:22.907747    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7m7\" (UniqueName: \"kubernetes.io/projected/947b4acb-082a-436b-b68f-d253f391ee24-kube-api-access-kh7m7\") pod \"kindnet-nj2sh\" (UID: \"947b4acb-082a-436b-b68f-d253f391ee24\") " pod="kube-system/kindnet-nj2sh"
	Dec 12 22:25:23 multinode-054207 kubelet[1270]: E1212 22:25:23.909230    1270 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 12 22:25:23 multinode-054207 kubelet[1270]: E1212 22:25:23.909477    1270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8875d71-d50e-44f1-92c1-db1858b4b3bb-kube-proxy podName:e8875d71-d50e-44f1-92c1-db1858b4b3bb nodeName:}" failed. No retries permitted until 2023-12-12 22:25:24.409391511 +0000 UTC m=+14.048900731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e8875d71-d50e-44f1-92c1-db1858b4b3bb-kube-proxy") pod "kube-proxy-rnx8m" (UID: "e8875d71-d50e-44f1-92c1-db1858b4b3bb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 12 22:25:27 multinode-054207 kubelet[1270]: I1212 22:25:27.699603    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rnx8m" podStartSLOduration=5.6995589970000005 podCreationTimestamp="2023-12-12 22:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:25:25.697088408 +0000 UTC m=+15.336597634" watchObservedRunningTime="2023-12-12 22:25:27.699558997 +0000 UTC m=+17.339068223"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.598155    1270 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.638678    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-nj2sh" podStartSLOduration=6.638642144 podCreationTimestamp="2023-12-12 22:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:25:27.699967601 +0000 UTC m=+17.339476827" watchObservedRunningTime="2023-12-12 22:25:28.638642144 +0000 UTC m=+18.278151369"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.638840    1270 topology_manager.go:215] "Topology Admit Handler" podUID="8bd5cacb-68c8-41e5-a91e-07e6a9739897" podNamespace="kube-system" podName="coredns-5dd5756b68-rj4p4"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.642830    1270 topology_manager.go:215] "Topology Admit Handler" podUID="40d577b4-8d36-4f55-946d-92755b1d6998" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.655415    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bd5cacb-68c8-41e5-a91e-07e6a9739897-config-volume\") pod \"coredns-5dd5756b68-rj4p4\" (UID: \"8bd5cacb-68c8-41e5-a91e-07e6a9739897\") " pod="kube-system/coredns-5dd5756b68-rj4p4"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.655450    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bd9\" (UniqueName: \"kubernetes.io/projected/8bd5cacb-68c8-41e5-a91e-07e6a9739897-kube-api-access-l8bd9\") pod \"coredns-5dd5756b68-rj4p4\" (UID: \"8bd5cacb-68c8-41e5-a91e-07e6a9739897\") " pod="kube-system/coredns-5dd5756b68-rj4p4"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.756593    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/40d577b4-8d36-4f55-946d-92755b1d6998-tmp\") pod \"storage-provisioner\" (UID: \"40d577b4-8d36-4f55-946d-92755b1d6998\") " pod="kube-system/storage-provisioner"
	Dec 12 22:25:28 multinode-054207 kubelet[1270]: I1212 22:25:28.756666    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzsg\" (UniqueName: \"kubernetes.io/projected/40d577b4-8d36-4f55-946d-92755b1d6998-kube-api-access-grzsg\") pod \"storage-provisioner\" (UID: \"40d577b4-8d36-4f55-946d-92755b1d6998\") " pod="kube-system/storage-provisioner"
	Dec 12 22:25:30 multinode-054207 kubelet[1270]: I1212 22:25:30.568890    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.568848373 podCreationTimestamp="2023-12-12 22:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:25:29.72746759 +0000 UTC m=+19.366976815" watchObservedRunningTime="2023-12-12 22:25:30.568848373 +0000 UTC m=+20.208357599"
	Dec 12 22:25:30 multinode-054207 kubelet[1270]: I1212 22:25:30.749587    1270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rj4p4" podStartSLOduration=8.749543582 podCreationTimestamp="2023-12-12 22:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:25:30.719428765 +0000 UTC m=+20.358937993" watchObservedRunningTime="2023-12-12 22:25:30.749543582 +0000 UTC m=+20.389052800"
	Dec 12 22:26:10 multinode-054207 kubelet[1270]: E1212 22:26:10.579455    1270 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:26:10 multinode-054207 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:26:10 multinode-054207 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:26:10 multinode-054207 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:26:14 multinode-054207 kubelet[1270]: I1212 22:26:14.676082    1270 topology_manager.go:215] "Topology Admit Handler" podUID="220bf84f-c796-488d-8673-554f240fda87" podNamespace="default" podName="busybox-5bc68d56bd-7fg9p"
	Dec 12 22:26:14 multinode-054207 kubelet[1270]: I1212 22:26:14.723566    1270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfmd\" (UniqueName: \"kubernetes.io/projected/220bf84f-c796-488d-8673-554f240fda87-kube-api-access-8tfmd\") pod \"busybox-5bc68d56bd-7fg9p\" (UID: \"220bf84f-c796-488d-8673-554f240fda87\") " pod="default/busybox-5bc68d56bd-7fg9p"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-054207 -n multinode-054207
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-054207 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (687.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054207
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-054207
E1212 22:29:17.803917   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-054207: exit status 82 (2m1.048165738s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-054207"  ...
	* Stopping node "multinode-054207"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-054207" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054207 --wait=true -v=8 --alsologtostderr
E1212 22:30:25.171635   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:31:39.569076   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:31:48.216526   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:34:17.803791   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:35:25.172024   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:35:40.849960   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:36:39.570140   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:38:02.617819   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054207 --wait=true -v=8 --alsologtostderr: (9m23.664186054s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054207
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-054207 -n multinode-054207
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-054207 logs -n 25: (1.618064265s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile707856264/001/cp-test_multinode-054207-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207:/home/docker/cp-test_multinode-054207-m02_multinode-054207.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n multinode-054207 sudo cat                                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /home/docker/cp-test_multinode-054207-m02_multinode-054207.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03:/home/docker/cp-test_multinode-054207-m02_multinode-054207-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n multinode-054207-m03 sudo cat                                   | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /home/docker/cp-test_multinode-054207-m02_multinode-054207-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp testdata/cp-test.txt                                                | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile707856264/001/cp-test_multinode-054207-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207:/home/docker/cp-test_multinode-054207-m03_multinode-054207.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n multinode-054207 sudo cat                                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /home/docker/cp-test_multinode-054207-m03_multinode-054207.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt                       | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m02:/home/docker/cp-test_multinode-054207-m03_multinode-054207-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n                                                                 | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | multinode-054207-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-054207 ssh -n multinode-054207-m02 sudo cat                                   | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | /home/docker/cp-test_multinode-054207-m03_multinode-054207-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-054207 node stop m03                                                          | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	| node    | multinode-054207 node start                                                             | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC | 12 Dec 23 22:27 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-054207                                                                | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC |                     |
	| stop    | -p multinode-054207                                                                     | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:27 UTC |                     |
	| start   | -p multinode-054207                                                                     | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:29 UTC | 12 Dec 23 22:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-054207                                                                | multinode-054207 | jenkins | v1.32.0 | 12 Dec 23 22:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:29:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:29:49.290896   99930 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:29:49.291202   99930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:29:49.291218   99930 out.go:309] Setting ErrFile to fd 2...
	I1212 22:29:49.291223   99930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:29:49.291469   99930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:29:49.292035   99930 out.go:303] Setting JSON to false
	I1212 22:29:49.293102   99930 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11543,"bootTime":1702408646,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:29:49.293174   99930 start.go:138] virtualization: kvm guest
	I1212 22:29:49.296406   99930 out.go:177] * [multinode-054207] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:29:49.297777   99930 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:29:49.297812   99930 notify.go:220] Checking for updates...
	I1212 22:29:49.299048   99930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:29:49.300444   99930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:29:49.302535   99930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:29:49.303910   99930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:29:49.305290   99930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:29:49.307085   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:29:49.307216   99930 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:29:49.307706   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:29:49.307781   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:29:49.327119   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I1212 22:29:49.327600   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:29:49.328144   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:29:49.328164   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:29:49.328515   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:29:49.328730   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:29:49.365218   99930 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 22:29:49.366483   99930 start.go:298] selected driver: kvm2
	I1212 22:29:49.366494   99930 start.go:902] validating driver "kvm2" against &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:29:49.366637   99930 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:29:49.366935   99930 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:29:49.367005   99930 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:29:49.382335   99930 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:29:49.383019   99930 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:29:49.383080   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:29:49.383092   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:29:49.383100   99930 start_flags.go:323] config:
	{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:29:49.383368   99930 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:29:49.385678   99930 out.go:177] * Starting control plane node multinode-054207 in cluster multinode-054207
	I1212 22:29:49.387306   99930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:29:49.387344   99930 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:29:49.387351   99930 cache.go:56] Caching tarball of preloaded images
	I1212 22:29:49.387438   99930 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:29:49.387448   99930 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
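
The cache.go lines above skip the tarball download because a preloaded image archive already sits in the profile's cache directory. A minimal illustrative sketch of that existence check, assuming a layout like the one shown in the log (the helper names are placeholders, not minikube's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedPreload builds the expected path of a preloaded-images tarball for a
// given Kubernetes version and container runtime (layout mirrors the log above).
func cachedPreload(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	path := cachedPreload(filepath.Join(os.Getenv("HOME"), ".minikube"), "v1.28.4", "cri-o")
	if _, err := os.Stat(path); err == nil {
		fmt.Println("found local preload, skipping download:", path)
	} else {
		fmt.Println("no cached preload, a download would be needed:", path)
	}
}
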
	I1212 22:29:49.387568   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:29:49.387759   99930 start.go:365] acquiring machines lock for multinode-054207: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:29:49.387819   99930 start.go:369] acquired machines lock for "multinode-054207" in 42.132µs
	I1212 22:29:49.387832   99930 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:29:49.387839   99930 fix.go:54] fixHost starting: 
	I1212 22:29:49.388085   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:29:49.388120   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:29:49.402345   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I1212 22:29:49.402789   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:29:49.403262   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:29:49.403290   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:29:49.403688   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:29:49.403891   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:29:49.404100   99930 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:29:49.405839   99930 fix.go:102] recreateIfNeeded on multinode-054207: state=Running err=<nil>
	W1212 22:29:49.405862   99930 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:29:49.407784   99930 out.go:177] * Updating the running kvm2 "multinode-054207" VM ...
	I1212 22:29:49.409958   99930 machine.go:88] provisioning docker machine ...
	I1212 22:29:49.409989   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:29:49.410287   99930 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:29:49.410439   99930 buildroot.go:166] provisioning hostname "multinode-054207"
	I1212 22:29:49.410460   99930 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:29:49.410606   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:29:49.413522   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:29:49.413951   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:29:49.413977   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:29:49.414106   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:29:49.414304   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:29:49.414460   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:29:49.414566   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:29:49.414697   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:29:49.415102   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:29:49.415119   99930 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207 && echo "multinode-054207" | sudo tee /etc/hostname
	I1212 22:30:07.971504   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:14.051551   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:17.123560   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:23.203582   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:26.275511   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:32.355549   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:35.427569   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:41.507556   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:44.579543   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:50.659504   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:53.731502   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:30:59.811537   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:02.883535   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:08.963582   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:12.035548   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:18.115588   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:21.187620   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:27.267554   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:30.339455   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:36.419528   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:39.491512   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:45.571523   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:48.643578   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:54.723623   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:31:57.795585   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:03.879535   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:06.947470   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:13.027603   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:16.099524   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:22.179579   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:25.251556   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:31.331564   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:34.403607   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:40.483604   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:43.555506   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:49.639544   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:52.707526   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:32:58.787546   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:01.859499   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:07.939527   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:11.011548   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:17.091529   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:20.163541   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:26.243615   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:29.315532   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:35.395554   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:38.467514   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:44.547523   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:47.619583   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:53.699574   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:33:56.771573   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:02.851573   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:05.923609   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:12.003568   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:15.079491   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:21.155536   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:24.227593   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:30.307524   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:33.379477   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:39.459426   99930 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1212 22:34:42.461598   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:34:42.461650   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:34:42.464064   99930 machine.go:91] provisioned docker machine in 4m53.054078098s
	I1212 22:34:42.464110   99930 fix.go:56] fixHost completed within 4m53.076271534s
	I1212 22:34:42.464116   99930 start.go:83] releasing machines lock for "multinode-054207", held for 4m53.076288311s
	W1212 22:34:42.464149   99930 start.go:694] error starting host: provision: host is not running
	W1212 22:34:42.464264   99930 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 22:34:42.464274   99930 start.go:709] Will try again in 5 seconds ...
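
After the long run of "no route to host" dial failures, provisioning is reported as failed and the caller simply waits five seconds before retrying the whole host fix. A rough, illustrative sketch of that outer retry, where startHost is only a stand-in for the real provisioning call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the provisioning call that failed above.
func startHost() error {
	return errors.New("provision: host is not running")
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("start failed on the second attempt:", err)
		}
	}
}
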
	I1212 22:34:47.467229   99930 start.go:365] acquiring machines lock for multinode-054207: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:34:47.467366   99930 start.go:369] acquired machines lock for "multinode-054207" in 61.198µs
	I1212 22:34:47.467386   99930 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:34:47.467392   99930 fix.go:54] fixHost starting: 
	I1212 22:34:47.467681   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:34:47.467702   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:34:47.483206   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I1212 22:34:47.483737   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:34:47.484273   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:34:47.484301   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:34:47.484737   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:34:47.484981   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:34:47.485141   99930 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:34:47.486916   99930 fix.go:102] recreateIfNeeded on multinode-054207: state=Stopped err=<nil>
	I1212 22:34:47.486942   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	W1212 22:34:47.487096   99930 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:34:47.489387   99930 out.go:177] * Restarting existing kvm2 VM for "multinode-054207" ...
	I1212 22:34:47.490787   99930 main.go:141] libmachine: (multinode-054207) Calling .Start
	I1212 22:34:47.490967   99930 main.go:141] libmachine: (multinode-054207) Ensuring networks are active...
	I1212 22:34:47.491819   99930 main.go:141] libmachine: (multinode-054207) Ensuring network default is active
	I1212 22:34:47.492122   99930 main.go:141] libmachine: (multinode-054207) Ensuring network mk-multinode-054207 is active
	I1212 22:34:47.492698   99930 main.go:141] libmachine: (multinode-054207) Getting domain xml...
	I1212 22:34:47.493417   99930 main.go:141] libmachine: (multinode-054207) Creating domain...
	I1212 22:34:48.736520   99930 main.go:141] libmachine: (multinode-054207) Waiting to get IP...
	I1212 22:34:48.737537   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:48.738114   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:48.738171   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:48.738068  100740 retry.go:31] will retry after 195.559516ms: waiting for machine to come up
	I1212 22:34:48.935617   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:48.936151   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:48.936182   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:48.936105  100740 retry.go:31] will retry after 241.462855ms: waiting for machine to come up
	I1212 22:34:49.179758   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:49.180205   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:49.180231   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:49.180145  100740 retry.go:31] will retry after 296.951392ms: waiting for machine to come up
	I1212 22:34:49.478550   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:49.479029   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:49.479060   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:49.478969  100740 retry.go:31] will retry after 496.865992ms: waiting for machine to come up
	I1212 22:34:49.977715   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:49.978205   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:49.978253   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:49.978165  100740 retry.go:31] will retry after 574.506117ms: waiting for machine to come up
	I1212 22:34:50.554654   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:50.555113   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:50.555161   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:50.555080  100740 retry.go:31] will retry after 782.857214ms: waiting for machine to come up
	I1212 22:34:51.340117   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:51.340523   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:51.340554   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:51.340467  100740 retry.go:31] will retry after 999.764856ms: waiting for machine to come up
	I1212 22:34:52.341568   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:52.342029   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:52.342077   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:52.341985  100740 retry.go:31] will retry after 1.059134966s: waiting for machine to come up
	I1212 22:34:53.402435   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:53.402947   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:53.402981   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:53.402874  100740 retry.go:31] will retry after 1.467116673s: waiting for machine to come up
	I1212 22:34:54.872693   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:54.873282   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:54.873315   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:54.873218  100740 retry.go:31] will retry after 2.105996025s: waiting for machine to come up
	I1212 22:34:56.980647   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:56.981159   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:56.981212   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:56.981039  100740 retry.go:31] will retry after 2.853243567s: waiting for machine to come up
	I1212 22:34:59.836308   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:34:59.836762   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:34:59.836795   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:34:59.836719  100740 retry.go:31] will retry after 2.252376402s: waiting for machine to come up
	I1212 22:35:02.092058   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:02.092426   99930 main.go:141] libmachine: (multinode-054207) DBG | unable to find current IP address of domain multinode-054207 in network mk-multinode-054207
	I1212 22:35:02.092451   99930 main.go:141] libmachine: (multinode-054207) DBG | I1212 22:35:02.092378  100740 retry.go:31] will retry after 3.723723109s: waiting for machine to come up
	I1212 22:35:05.819579   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.820099   99930 main.go:141] libmachine: (multinode-054207) Found IP for machine: 192.168.39.172
	I1212 22:35:05.820128   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has current primary IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
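
The retry.go lines above poll the libvirt network's DHCP leases for the VM's MAC address, waiting a little longer after each miss until a lease appears. A condensed, illustrative version of that poll-with-growing-delay loop, where lookupLeaseIP is a placeholder for the real lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP is a placeholder for reading the libvirt DHCP leases for a
// MAC address; it fails until the guest has obtained a lease.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease for " + mac + " yet")
}

// waitForIP polls until a lease appears or the timeout expires, growing the
// delay between attempts roughly the way the log above does.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	if ip, err := waitForIP("52:54:00:7d:bc:d2", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
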
	I1212 22:35:05.820138   99930 main.go:141] libmachine: (multinode-054207) Reserving static IP address...
	I1212 22:35:05.820566   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "multinode-054207", mac: "52:54:00:7d:bc:d2", ip: "192.168.39.172"} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:05.820599   99930 main.go:141] libmachine: (multinode-054207) Reserved static IP address: 192.168.39.172
	I1212 22:35:05.820623   99930 main.go:141] libmachine: (multinode-054207) DBG | skip adding static IP to network mk-multinode-054207 - found existing host DHCP lease matching {name: "multinode-054207", mac: "52:54:00:7d:bc:d2", ip: "192.168.39.172"}
	I1212 22:35:05.820639   99930 main.go:141] libmachine: (multinode-054207) DBG | Getting to WaitForSSH function...
	I1212 22:35:05.820656   99930 main.go:141] libmachine: (multinode-054207) Waiting for SSH to be available...
	I1212 22:35:05.822621   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.822920   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:05.822953   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.823060   99930 main.go:141] libmachine: (multinode-054207) DBG | Using SSH client type: external
	I1212 22:35:05.823091   99930 main.go:141] libmachine: (multinode-054207) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa (-rw-------)
	I1212 22:35:05.823131   99930 main.go:141] libmachine: (multinode-054207) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 22:35:05.823151   99930 main.go:141] libmachine: (multinode-054207) DBG | About to run SSH command:
	I1212 22:35:05.823161   99930 main.go:141] libmachine: (multinode-054207) DBG | exit 0
	I1212 22:35:05.911350   99930 main.go:141] libmachine: (multinode-054207) DBG | SSH cmd err, output: <nil>: 
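
The WaitForSSH step above shells out to the system ssh client with the options shown and runs "exit 0"; a zero exit status means sshd is up and the machine key is accepted. An illustrative probe along those lines (the IP comes from the log, the key path is shortened to a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` against the guest; a nil error means the
// SSH daemon is reachable and the private key is accepted.
func sshReady(user, ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		user+"@"+ip,
		"exit 0",
	)
	return cmd.Run()
}

func main() {
	if err := sshReady("docker", "192.168.39.172", "/path/to/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
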
	I1212 22:35:05.911752   99930 main.go:141] libmachine: (multinode-054207) Calling .GetConfigRaw
	I1212 22:35:05.912474   99930 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:35:05.914793   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.915177   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:05.915211   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.915487   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:35:05.915731   99930 machine.go:88] provisioning docker machine ...
	I1212 22:35:05.915755   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:05.915975   99930 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:35:05.916143   99930 buildroot.go:166] provisioning hostname "multinode-054207"
	I1212 22:35:05.916164   99930 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:35:05.916327   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:05.918681   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.919013   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:05.919035   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:05.919206   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:05.919393   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:05.919561   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:05.919737   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:05.919920   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:05.920343   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:35:05.920362   99930 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207 && echo "multinode-054207" | sudo tee /etc/hostname
	I1212 22:35:06.047533   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-054207
	
	I1212 22:35:06.047568   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:06.050533   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.050972   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.051005   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.051189   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:06.051407   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.051566   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.051707   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:06.051872   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:06.052277   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:35:06.052302   99930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-054207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-054207/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-054207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:35:06.175734   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
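
Both the hostname command and the /etc/hosts patch above run as one-shot commands over an SSH session to the guest. A minimal, illustrative runner using golang.org/x/crypto/ssh with key-based auth like the log shows; this is not minikube's actual ssh_runner, and the host, user and key path are placeholders:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens one SSH session and runs a single shell command, returning
// its combined stdout/stderr, much like the provisioning steps in the log.
func runRemote(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, mirrors StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.172:22", "docker", "/path/to/id_rsa",
		`sudo hostname demo-host && echo "demo-host" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
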
	I1212 22:35:06.175798   99930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:35:06.175844   99930 buildroot.go:174] setting up certificates
	I1212 22:35:06.175896   99930 provision.go:83] configureAuth start
	I1212 22:35:06.175917   99930 main.go:141] libmachine: (multinode-054207) Calling .GetMachineName
	I1212 22:35:06.176217   99930 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:35:06.178641   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.179038   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.179074   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.179211   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:06.181671   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.182052   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.182088   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.182200   99930 provision.go:138] copyHostCerts
	I1212 22:35:06.182237   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:35:06.182315   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:35:06.182344   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:35:06.182432   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:35:06.182596   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:35:06.182635   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:35:06.182648   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:35:06.182700   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:35:06.182773   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:35:06.182796   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:35:06.182804   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:35:06.182830   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
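
copyHostCerts above refreshes ca.pem, cert.pem and key.pem in the .minikube root by removing any stale copy and re-copying from the certs directory. An illustrative remove-then-copy helper; the paths in main are examples, not the test host's:

package main

import (
	"fmt"
	"io"
	"os"
)

// refreshCopy removes any existing destination file and copies src over it,
// mirroring the "found ..., removing ... / cp: ..." sequence in the log.
func refreshCopy(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := refreshCopy("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Println("copy failed:", err)
	}
}
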
	I1212 22:35:06.182886   99930 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.multinode-054207 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube multinode-054207]
	I1212 22:35:06.417607   99930 provision.go:172] copyRemoteCerts
	I1212 22:35:06.417704   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:35:06.417730   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:06.420409   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.420682   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.420716   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.420896   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:06.421104   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.421284   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:06.421425   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:35:06.510860   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:35:06.510955   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:35:06.535991   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:35:06.536071   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 22:35:06.558124   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:35:06.558196   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:35:06.581084   99930 provision.go:86] duration metric: configureAuth took 405.165793ms
	I1212 22:35:06.581122   99930 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:35:06.581392   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:06.581470   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:06.584527   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.584995   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.585027   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.585221   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:06.585459   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.585634   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.585769   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:06.585938   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:06.586288   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:35:06.586307   99930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:35:06.898814   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:35:06.898845   99930 machine.go:91] provisioned docker machine in 983.098212ms
	I1212 22:35:06.898859   99930 start.go:300] post-start starting for "multinode-054207" (driver="kvm2")
	I1212 22:35:06.898893   99930 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:35:06.898912   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:06.899297   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:35:06.899329   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:06.902020   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.902352   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:06.902379   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:06.902502   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:06.902708   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:06.902895   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:06.903056   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:35:06.993709   99930 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:35:06.997927   99930 command_runner.go:130] > NAME=Buildroot
	I1212 22:35:06.997960   99930 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 22:35:06.997968   99930 command_runner.go:130] > ID=buildroot
	I1212 22:35:06.997976   99930 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 22:35:06.997984   99930 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 22:35:06.998037   99930 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:35:06.998054   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:35:06.998126   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:35:06.998195   99930 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:35:06.998205   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:35:06.998287   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:35:07.007495   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:35:07.029988   99930 start.go:303] post-start completed in 131.109381ms
	I1212 22:35:07.030026   99930 fix.go:56] fixHost completed within 19.562632987s
	I1212 22:35:07.030083   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:07.032673   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.033091   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:07.033134   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.033275   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:07.033492   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:07.033707   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:07.033862   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:07.034018   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:07.034361   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1212 22:35:07.034376   99930 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:35:07.152254   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702420507.101928382
	
	I1212 22:35:07.152281   99930 fix.go:206] guest clock: 1702420507.101928382
	I1212 22:35:07.152291   99930 fix.go:219] Guest: 2023-12-12 22:35:07.101928382 +0000 UTC Remote: 2023-12-12 22:35:07.030031613 +0000 UTC m=+317.790267945 (delta=71.896769ms)
	I1212 22:35:07.152323   99930 fix.go:190] guest clock delta is within tolerance: 71.896769ms
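
The guest clock check above runs `date +%s.%N` in the VM, parses the result as seconds since the epoch, and compares it with the host's clock; the restart only adjusts the clock if the delta falls outside a tolerance. A small illustrative version of that comparison, using the values from the log; the tolerance constant here is an assumption, not minikube's:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the supplied local time.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed tolerance, for illustration only
	delta, err := clockDelta("1702420507.101928382", time.Unix(1702420507, 30031613))
	if err != nil {
		fmt.Println(err)
		return
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("guest clock delta is within tolerance:", delta)
	} else {
		fmt.Println("guest clock needs adjusting, delta:", delta)
	}
}
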
	I1212 22:35:07.152329   99930 start.go:83] releasing machines lock for "multinode-054207", held for 19.684954747s
	I1212 22:35:07.152351   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:07.152682   99930 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:35:07.155549   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.155911   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:07.155936   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.156110   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:07.156666   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:07.156841   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:35:07.156927   99930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:35:07.156970   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:07.157080   99930 ssh_runner.go:195] Run: cat /version.json
	I1212 22:35:07.157124   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:35:07.159513   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.159639   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.159961   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:07.159992   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.160021   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:07.160037   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:07.160055   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:07.160264   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:35:07.160268   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:07.160453   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:07.160455   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:35:07.160634   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:35:07.160655   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:35:07.160808   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:35:07.244079   99930 command_runner.go:130] > {"iso_version": "v1.32.1-1702394653-17761", "kicbase_version": "v0.0.42-1702334074-17764", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 22:35:07.244532   99930 ssh_runner.go:195] Run: systemctl --version
	I1212 22:35:07.268210   99930 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:35:07.268352   99930 command_runner.go:130] > systemd 247 (247)
	I1212 22:35:07.268382   99930 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 22:35:07.268454   99930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:35:07.409553   99930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:35:07.416354   99930 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 22:35:07.416406   99930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:35:07.416475   99930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:35:07.431179   99930 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 22:35:07.431485   99930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
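
Because this cluster brings its own CNI (kindnet), any pre-installed bridge or podman CNI configs under /etc/cni/net.d are neutralised by renaming them with a .mk_disabled suffix, which is what the find/mv pipeline above does on the guest. An illustrative local equivalent of that step; the directory and suffix come from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI config files so the container
// runtime ignores them, mirroring the *.mk_disabled convention in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println("disabled:", disabled, "err:", err)
}
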
	I1212 22:35:07.431506   99930 start.go:475] detecting cgroup driver to use...
	I1212 22:35:07.431629   99930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:35:07.448860   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:35:07.461486   99930 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:35:07.461572   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:35:07.474047   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:35:07.487026   99930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:35:07.501202   99930 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1212 22:35:07.590090   99930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:35:07.711222   99930 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 22:35:07.711292   99930 docker.go:219] disabling docker service ...
	I1212 22:35:07.711360   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:35:07.725471   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:35:07.740059   99930 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1212 22:35:07.740164   99930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:35:07.852306   99930 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 22:35:07.852405   99930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:35:07.963780   99930 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1212 22:35:07.963833   99930 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 22:35:07.963919   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:35:07.977649   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:35:07.995145   99930 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:35:07.995229   99930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:35:07.995293   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:35:08.005763   99930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:35:08.005840   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:35:08.016273   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:35:08.026489   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:35:08.036371   99930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:35:08.046422   99930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:35:08.057030   99930 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:35:08.057076   99930 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 22:35:08.057126   99930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 22:35:08.070647   99930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
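	The sysctl failure a few lines up is the expected first-boot case: the net.bridge.* entries only exist once the br_netfilter module is loaded, so the fallback is modprobe followed by enabling IPv4 forwarding. A rough shell equivalent of that check-then-load sequence, assuming the guest kernel ships the br_netfilter module:

	    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	        # the bridge sysctls appear only after the module is loaded
	        sudo modprobe br_netfilter
	    fi
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'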
	I1212 22:35:08.080078   99930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:35:08.178839   99930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:35:08.344286   99930 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:35:08.344360   99930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:35:08.349540   99930 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:35:08.349560   99930 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:35:08.349567   99930 command_runner.go:130] > Device: 16h/22d	Inode: 749         Links: 1
	I1212 22:35:08.349574   99930 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:35:08.349579   99930 command_runner.go:130] > Access: 2023-12-12 22:35:08.274811046 +0000
	I1212 22:35:08.349589   99930 command_runner.go:130] > Modify: 2023-12-12 22:35:08.274811046 +0000
	I1212 22:35:08.349594   99930 command_runner.go:130] > Change: 2023-12-12 22:35:08.274811046 +0000
	I1212 22:35:08.349598   99930 command_runner.go:130] >  Birth: -
	I1212 22:35:08.349721   99930 start.go:543] Will wait 60s for crictl version
	I1212 22:35:08.349785   99930 ssh_runner.go:195] Run: which crictl
	I1212 22:35:08.353725   99930 command_runner.go:130] > /usr/bin/crictl
	I1212 22:35:08.353796   99930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:35:08.402114   99930 command_runner.go:130] > Version:  0.1.0
	I1212 22:35:08.402143   99930 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:35:08.402148   99930 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 22:35:08.402153   99930 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:35:08.402171   99930 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:35:08.402243   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:35:08.448400   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:35:08.448430   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:35:08.448440   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:35:08.448447   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:35:08.448456   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:35:08.448464   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:35:08.448470   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:35:08.448478   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:35:08.448487   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:35:08.448496   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:35:08.448506   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:35:08.448510   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:35:08.449689   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:35:08.500911   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:35:08.500935   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:35:08.500949   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:35:08.500954   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:35:08.500960   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:35:08.500965   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:35:08.500970   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:35:08.500974   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:35:08.500980   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:35:08.500988   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:35:08.500992   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:35:08.500996   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:35:08.504953   99930 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:35:08.506428   99930 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:35:08.509495   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:08.509957   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:35:08.509986   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:35:08.510184   99930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:35:08.514246   99930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:35:08.526294   99930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:35:08.526361   99930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:35:08.561661   99930 command_runner.go:130] > {
	I1212 22:35:08.561683   99930 command_runner.go:130] >   "images": [
	I1212 22:35:08.561688   99930 command_runner.go:130] >     {
	I1212 22:35:08.561696   99930 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 22:35:08.561701   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:08.561706   99930 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 22:35:08.561709   99930 command_runner.go:130] >       ],
	I1212 22:35:08.561719   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:08.561728   99930 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 22:35:08.561754   99930 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 22:35:08.561758   99930 command_runner.go:130] >       ],
	I1212 22:35:08.561763   99930 command_runner.go:130] >       "size": "750414",
	I1212 22:35:08.561766   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:08.561771   99930 command_runner.go:130] >         "value": "65535"
	I1212 22:35:08.561774   99930 command_runner.go:130] >       },
	I1212 22:35:08.561780   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:08.561788   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:08.561806   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:08.561810   99930 command_runner.go:130] >     }
	I1212 22:35:08.561814   99930 command_runner.go:130] >   ]
	I1212 22:35:08.561818   99930 command_runner.go:130] > }
	I1212 22:35:08.562983   99930 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
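	That decision follows from the image list just printed: only registry.k8s.io/pause:3.9 is present, so registry.k8s.io/kube-apiserver:v1.28.4 (and the rest of the control-plane images) must come from the preload tarball. A hand-run approximation of the same check (minikube performs it in Go against the JSON, not with grep):

	    sudo crictl images --output json \
	      | grep -q '"registry.k8s.io/kube-apiserver:v1.28.4"' \
	      || echo "preload required"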
	I1212 22:35:08.563063   99930 ssh_runner.go:195] Run: which lz4
	I1212 22:35:08.566802   99930 command_runner.go:130] > /usr/bin/lz4
	I1212 22:35:08.566831   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 22:35:08.566920   99930 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 22:35:08.570649   99930 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:35:08.570818   99930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:35:08.570844   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 22:35:10.460434   99930 crio.go:444] Took 1.893541 seconds to copy over tarball
	I1212 22:35:10.460501   99930 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:35:13.419922   99930 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959389296s)
	I1212 22:35:13.419979   99930 crio.go:451] Took 2.959518 seconds to extract the tarball
	I1212 22:35:13.419991   99930 ssh_runner.go:146] rm: /preloaded.tar.lz4
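	The preload fast path logged here is an scp of the lz4 tarball followed by an lz4-aware tar extraction into /var, which backs /var/lib/containers/storage for the overlay storage driver. Roughly, run by hand from the CI host (paths, key, and the docker user are taken from the log; copying to /tmp first is a simplification of what minikube's ssh runner actually does):

	    MK=/home/jenkins/minikube-integration/17761-76611/.minikube
	    KEY=$MK/machines/multinode-054207/id_rsa
	    PRELOAD=$MK/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	    scp -i "$KEY" "$PRELOAD" docker@192.168.39.172:/tmp/preloaded.tar.lz4
	    # requires lz4 on the node; the log checked "which lz4" before this step
	    ssh -i "$KEY" docker@192.168.39.172 \
	        'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm -f /tmp/preloaded.tar.lz4'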
	I1212 22:35:13.460445   99930 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:35:13.509936   99930 command_runner.go:130] > {
	I1212 22:35:13.509964   99930 command_runner.go:130] >   "images": [
	I1212 22:35:13.509971   99930 command_runner.go:130] >     {
	I1212 22:35:13.509984   99930 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 22:35:13.509996   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510006   99930 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 22:35:13.510013   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510020   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510034   99930 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 22:35:13.510046   99930 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 22:35:13.510053   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510060   99930 command_runner.go:130] >       "size": "65258016",
	I1212 22:35:13.510075   99930 command_runner.go:130] >       "uid": null,
	I1212 22:35:13.510082   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.510091   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.510102   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.510109   99930 command_runner.go:130] >     },
	I1212 22:35:13.510118   99930 command_runner.go:130] >     {
	I1212 22:35:13.510129   99930 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 22:35:13.510139   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510149   99930 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 22:35:13.510158   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510176   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510193   99930 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 22:35:13.510205   99930 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 22:35:13.510211   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510223   99930 command_runner.go:130] >       "size": "31470524",
	I1212 22:35:13.510229   99930 command_runner.go:130] >       "uid": null,
	I1212 22:35:13.510236   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.510242   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.510249   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.510254   99930 command_runner.go:130] >     },
	I1212 22:35:13.510261   99930 command_runner.go:130] >     {
	I1212 22:35:13.510275   99930 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 22:35:13.510291   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510303   99930 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 22:35:13.510313   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510324   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510342   99930 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 22:35:13.510356   99930 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 22:35:13.510368   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510377   99930 command_runner.go:130] >       "size": "53621675",
	I1212 22:35:13.510386   99930 command_runner.go:130] >       "uid": null,
	I1212 22:35:13.510396   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.510405   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.510414   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.510422   99930 command_runner.go:130] >     },
	I1212 22:35:13.510430   99930 command_runner.go:130] >     {
	I1212 22:35:13.510440   99930 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 22:35:13.510450   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510461   99930 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 22:35:13.510469   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510479   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510493   99930 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 22:35:13.510507   99930 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 22:35:13.510528   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510538   99930 command_runner.go:130] >       "size": "295456551",
	I1212 22:35:13.510548   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:13.510561   99930 command_runner.go:130] >         "value": "0"
	I1212 22:35:13.510569   99930 command_runner.go:130] >       },
	I1212 22:35:13.510580   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.510590   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.510597   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.510605   99930 command_runner.go:130] >     },
	I1212 22:35:13.510614   99930 command_runner.go:130] >     {
	I1212 22:35:13.510628   99930 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 22:35:13.510637   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510649   99930 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 22:35:13.510659   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510669   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510685   99930 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 22:35:13.510701   99930 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 22:35:13.510710   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510719   99930 command_runner.go:130] >       "size": "127226832",
	I1212 22:35:13.510728   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:13.510738   99930 command_runner.go:130] >         "value": "0"
	I1212 22:35:13.510750   99930 command_runner.go:130] >       },
	I1212 22:35:13.510761   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.510771   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.510781   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.510793   99930 command_runner.go:130] >     },
	I1212 22:35:13.510802   99930 command_runner.go:130] >     {
	I1212 22:35:13.510815   99930 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 22:35:13.510825   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.510837   99930 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 22:35:13.510847   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510858   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.510874   99930 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 22:35:13.510890   99930 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 22:35:13.510898   99930 command_runner.go:130] >       ],
	I1212 22:35:13.510908   99930 command_runner.go:130] >       "size": "123261750",
	I1212 22:35:13.510916   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:13.510922   99930 command_runner.go:130] >         "value": "0"
	I1212 22:35:13.510931   99930 command_runner.go:130] >       },
	I1212 22:35:13.510979   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.511009   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.511016   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.511025   99930 command_runner.go:130] >     },
	I1212 22:35:13.511034   99930 command_runner.go:130] >     {
	I1212 22:35:13.511047   99930 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 22:35:13.511056   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.511068   99930 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 22:35:13.511076   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511086   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.511101   99930 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 22:35:13.511116   99930 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 22:35:13.511125   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511136   99930 command_runner.go:130] >       "size": "74749335",
	I1212 22:35:13.511145   99930 command_runner.go:130] >       "uid": null,
	I1212 22:35:13.511154   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.511164   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.511174   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.511187   99930 command_runner.go:130] >     },
	I1212 22:35:13.511197   99930 command_runner.go:130] >     {
	I1212 22:35:13.511209   99930 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 22:35:13.511219   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.511231   99930 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 22:35:13.511249   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511260   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.511358   99930 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 22:35:13.511376   99930 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 22:35:13.511382   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511389   99930 command_runner.go:130] >       "size": "61551410",
	I1212 22:35:13.511398   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:13.511407   99930 command_runner.go:130] >         "value": "0"
	I1212 22:35:13.511417   99930 command_runner.go:130] >       },
	I1212 22:35:13.511427   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.511436   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.511447   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.511456   99930 command_runner.go:130] >     },
	I1212 22:35:13.511468   99930 command_runner.go:130] >     {
	I1212 22:35:13.511477   99930 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 22:35:13.511484   99930 command_runner.go:130] >       "repoTags": [
	I1212 22:35:13.511489   99930 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 22:35:13.511495   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511499   99930 command_runner.go:130] >       "repoDigests": [
	I1212 22:35:13.511509   99930 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 22:35:13.511520   99930 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 22:35:13.511529   99930 command_runner.go:130] >       ],
	I1212 22:35:13.511538   99930 command_runner.go:130] >       "size": "750414",
	I1212 22:35:13.511548   99930 command_runner.go:130] >       "uid": {
	I1212 22:35:13.511557   99930 command_runner.go:130] >         "value": "65535"
	I1212 22:35:13.511566   99930 command_runner.go:130] >       },
	I1212 22:35:13.511576   99930 command_runner.go:130] >       "username": "",
	I1212 22:35:13.511585   99930 command_runner.go:130] >       "spec": null,
	I1212 22:35:13.511595   99930 command_runner.go:130] >       "pinned": false
	I1212 22:35:13.511603   99930 command_runner.go:130] >     }
	I1212 22:35:13.511609   99930 command_runner.go:130] >   ]
	I1212 22:35:13.511621   99930 command_runner.go:130] > }
	I1212 22:35:13.511782   99930 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:35:13.511816   99930 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:35:13.511906   99930 ssh_runner.go:195] Run: crio config
	I1212 22:35:13.564236   99930 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:35:13.564284   99930 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:35:13.564297   99930 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:35:13.564303   99930 command_runner.go:130] > #
	I1212 22:35:13.564314   99930 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:35:13.564325   99930 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:35:13.564335   99930 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:35:13.564343   99930 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:35:13.564348   99930 command_runner.go:130] > # reload'.
	I1212 22:35:13.564358   99930 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:35:13.564369   99930 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:35:13.564380   99930 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:35:13.564393   99930 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:35:13.564400   99930 command_runner.go:130] > [crio]
	I1212 22:35:13.564411   99930 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:35:13.564421   99930 command_runner.go:130] > # containers images, in this directory.
	I1212 22:35:13.564436   99930 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 22:35:13.564453   99930 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:35:13.564463   99930 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 22:35:13.564473   99930 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:35:13.564486   99930 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:35:13.564494   99930 command_runner.go:130] > storage_driver = "overlay"
	I1212 22:35:13.564505   99930 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:35:13.564512   99930 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:35:13.564518   99930 command_runner.go:130] > storage_option = [
	I1212 22:35:13.564527   99930 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 22:35:13.564533   99930 command_runner.go:130] > ]
	I1212 22:35:13.564549   99930 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:35:13.564563   99930 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:35:13.564571   99930 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:35:13.564582   99930 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:35:13.564592   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:35:13.564602   99930 command_runner.go:130] > # always happen on a node reboot
	I1212 22:35:13.564611   99930 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:35:13.564627   99930 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:35:13.564636   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:35:13.564664   99930 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:35:13.564711   99930 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:35:13.564728   99930 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:35:13.564741   99930 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:35:13.564751   99930 command_runner.go:130] > # internal_wipe = true
	I1212 22:35:13.564760   99930 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:35:13.564773   99930 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:35:13.564784   99930 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:35:13.564798   99930 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:35:13.564824   99930 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:35:13.564831   99930 command_runner.go:130] > [crio.api]
	I1212 22:35:13.564844   99930 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:35:13.564854   99930 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:35:13.564865   99930 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:35:13.564873   99930 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:35:13.564886   99930 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:35:13.564967   99930 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:35:13.564994   99930 command_runner.go:130] > # stream_port = "0"
	I1212 22:35:13.565003   99930 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:35:13.565010   99930 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:35:13.565024   99930 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:35:13.565034   99930 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:35:13.565046   99930 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:35:13.565060   99930 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:35:13.565068   99930 command_runner.go:130] > # minutes.
	I1212 22:35:13.565075   99930 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:35:13.565088   99930 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:35:13.565100   99930 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:35:13.565107   99930 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:35:13.565120   99930 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:35:13.565134   99930 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:35:13.565146   99930 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:35:13.565155   99930 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:35:13.565167   99930 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:35:13.565183   99930 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 22:35:13.565214   99930 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:35:13.565257   99930 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 22:35:13.565289   99930 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:35:13.565303   99930 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:35:13.565311   99930 command_runner.go:130] > [crio.runtime]
	I1212 22:35:13.565323   99930 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:35:13.565335   99930 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:35:13.565345   99930 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:35:13.565356   99930 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:35:13.565365   99930 command_runner.go:130] > # default_ulimits = [
	I1212 22:35:13.565372   99930 command_runner.go:130] > # ]
	I1212 22:35:13.565386   99930 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:35:13.565396   99930 command_runner.go:130] > # no_pivot = false
	I1212 22:35:13.565405   99930 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:35:13.565418   99930 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:35:13.565430   99930 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:35:13.565443   99930 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:35:13.565461   99930 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:35:13.565477   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:35:13.565489   99930 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 22:35:13.565498   99930 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:35:13.565510   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:35:13.565520   99930 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:35:13.565530   99930 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:35:13.565542   99930 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:35:13.565555   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:35:13.565583   99930 command_runner.go:130] > conmon_env = [
	I1212 22:35:13.565602   99930 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 22:35:13.565608   99930 command_runner.go:130] > ]
	I1212 22:35:13.565621   99930 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:35:13.565641   99930 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:35:13.565655   99930 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:35:13.565665   99930 command_runner.go:130] > # default_env = [
	I1212 22:35:13.565674   99930 command_runner.go:130] > # ]
	I1212 22:35:13.565684   99930 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:35:13.565696   99930 command_runner.go:130] > # selinux = false
	I1212 22:35:13.565707   99930 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:35:13.565722   99930 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:35:13.565734   99930 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:35:13.565744   99930 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:35:13.565756   99930 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:35:13.565768   99930 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:35:13.565811   99930 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:35:13.565820   99930 command_runner.go:130] > # which might increase security.
	I1212 22:35:13.565833   99930 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 22:35:13.565845   99930 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:35:13.565858   99930 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:35:13.565871   99930 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:35:13.565882   99930 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:35:13.565897   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:35:13.565904   99930 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:35:13.565910   99930 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:35:13.565917   99930 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:35:13.565924   99930 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:35:13.565932   99930 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:35:13.565937   99930 command_runner.go:130] > # irqbalance daemon.
	I1212 22:35:13.565944   99930 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:35:13.565950   99930 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:35:13.565958   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:35:13.565962   99930 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:35:13.565969   99930 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:35:13.565974   99930 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:35:13.565979   99930 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:35:13.565986   99930 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:35:13.565992   99930 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:35:13.566001   99930 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:35:13.566005   99930 command_runner.go:130] > # will be added.
	I1212 22:35:13.566010   99930 command_runner.go:130] > # default_capabilities = [
	I1212 22:35:13.566014   99930 command_runner.go:130] > # 	"CHOWN",
	I1212 22:35:13.566019   99930 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:35:13.566023   99930 command_runner.go:130] > # 	"FSETID",
	I1212 22:35:13.566032   99930 command_runner.go:130] > # 	"FOWNER",
	I1212 22:35:13.566039   99930 command_runner.go:130] > # 	"SETGID",
	I1212 22:35:13.566045   99930 command_runner.go:130] > # 	"SETUID",
	I1212 22:35:13.566054   99930 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:35:13.566061   99930 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:35:13.566071   99930 command_runner.go:130] > # 	"KILL",
	I1212 22:35:13.566078   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566093   99930 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:35:13.566103   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:35:13.566109   99930 command_runner.go:130] > # default_sysctls = [
	I1212 22:35:13.566112   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566119   99930 command_runner.go:130] > # List of devices on the host that a
	I1212 22:35:13.566125   99930 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:35:13.566134   99930 command_runner.go:130] > # allowed_devices = [
	I1212 22:35:13.566141   99930 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:35:13.566144   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566149   99930 command_runner.go:130] > # List of additional devices. specified as
	I1212 22:35:13.566159   99930 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:35:13.566171   99930 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:35:13.566207   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:35:13.566214   99930 command_runner.go:130] > # additional_devices = [
	I1212 22:35:13.566217   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566222   99930 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:35:13.566226   99930 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:35:13.566230   99930 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:35:13.566234   99930 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:35:13.566238   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566244   99930 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:35:13.566251   99930 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:35:13.566255   99930 command_runner.go:130] > # Defaults to false.
	I1212 22:35:13.566260   99930 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:35:13.566268   99930 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:35:13.566274   99930 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:35:13.566280   99930 command_runner.go:130] > # hooks_dir = [
	I1212 22:35:13.566304   99930 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:35:13.566310   99930 command_runner.go:130] > # ]
	I1212 22:35:13.566318   99930 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:35:13.566327   99930 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:35:13.566333   99930 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:35:13.566338   99930 command_runner.go:130] > #
	I1212 22:35:13.566345   99930 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:35:13.566353   99930 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:35:13.566359   99930 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:35:13.566364   99930 command_runner.go:130] > #
	I1212 22:35:13.566370   99930 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:35:13.566379   99930 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:35:13.566385   99930 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:35:13.566392   99930 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:35:13.566395   99930 command_runner.go:130] > #
	I1212 22:35:13.566400   99930 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:35:13.566408   99930 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:35:13.566421   99930 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:35:13.566431   99930 command_runner.go:130] > pids_limit = 1024
	I1212 22:35:13.566442   99930 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 22:35:13.566460   99930 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:35:13.566471   99930 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:35:13.566480   99930 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:35:13.566486   99930 command_runner.go:130] > # log_size_max = -1
	I1212 22:35:13.566493   99930 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1212 22:35:13.566500   99930 command_runner.go:130] > # log_to_journald = false
	I1212 22:35:13.566506   99930 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:35:13.566513   99930 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:35:13.566518   99930 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:35:13.566529   99930 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:35:13.566542   99930 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:35:13.566550   99930 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:35:13.566562   99930 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:35:13.566573   99930 command_runner.go:130] > # read_only = false
	I1212 22:35:13.566583   99930 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:35:13.566591   99930 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:35:13.566596   99930 command_runner.go:130] > # live configuration reload.
	I1212 22:35:13.566601   99930 command_runner.go:130] > # log_level = "info"
	I1212 22:35:13.566609   99930 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:35:13.566616   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:35:13.566623   99930 command_runner.go:130] > # log_filter = ""
	I1212 22:35:13.566636   99930 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:35:13.566650   99930 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:35:13.566661   99930 command_runner.go:130] > # separated by comma.
	I1212 22:35:13.566671   99930 command_runner.go:130] > # uid_mappings = ""
	I1212 22:35:13.566680   99930 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:35:13.566690   99930 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:35:13.566694   99930 command_runner.go:130] > # separated by comma.
	I1212 22:35:13.566704   99930 command_runner.go:130] > # gid_mappings = ""
	I1212 22:35:13.566714   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:35:13.566728   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:35:13.566738   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:35:13.566746   99930 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:35:13.566759   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:35:13.566772   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:35:13.566785   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:35:13.566798   99930 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:35:13.566809   99930 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:35:13.566823   99930 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:35:13.566836   99930 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:35:13.566846   99930 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:35:13.566856   99930 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:35:13.566868   99930 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:35:13.566880   99930 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:35:13.566896   99930 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:35:13.566905   99930 command_runner.go:130] > drop_infra_ctr = false
	I1212 22:35:13.566916   99930 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:35:13.566929   99930 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:35:13.566943   99930 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:35:13.566973   99930 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:35:13.566985   99930 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:35:13.566997   99930 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:35:13.567015   99930 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:35:13.567027   99930 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:35:13.567042   99930 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 22:35:13.567055   99930 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:35:13.567064   99930 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:35:13.567075   99930 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:35:13.567086   99930 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:35:13.567099   99930 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:35:13.567113   99930 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 22:35:13.567131   99930 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1212 22:35:13.567142   99930 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:35:13.567154   99930 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:35:13.567167   99930 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:35:13.567178   99930 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:35:13.567190   99930 command_runner.go:130] > # ]
	I1212 22:35:13.567204   99930 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:35:13.567214   99930 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:35:13.567227   99930 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:35:13.567246   99930 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:35:13.567253   99930 command_runner.go:130] > #
	I1212 22:35:13.567271   99930 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:35:13.567281   99930 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:35:13.567291   99930 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:35:13.567302   99930 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:35:13.567312   99930 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:35:13.567322   99930 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:35:13.567332   99930 command_runner.go:130] > # Where:
	I1212 22:35:13.567340   99930 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:35:13.567350   99930 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:35:13.567361   99930 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:35:13.567375   99930 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:35:13.567386   99930 command_runner.go:130] > #   in $PATH.
	I1212 22:35:13.567396   99930 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:35:13.567407   99930 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:35:13.567424   99930 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:35:13.567438   99930 command_runner.go:130] > #   state.
	I1212 22:35:13.567445   99930 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:35:13.567458   99930 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 22:35:13.567475   99930 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:35:13.567488   99930 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:35:13.567502   99930 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:35:13.567515   99930 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:35:13.567527   99930 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:35:13.567541   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:35:13.567555   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:35:13.567569   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:35:13.567584   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:35:13.567596   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:35:13.567608   99930 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:35:13.567623   99930 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:35:13.567634   99930 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:35:13.567645   99930 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:35:13.567656   99930 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:35:13.567662   99930 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 22:35:13.567669   99930 command_runner.go:130] > runtime_type = "oci"
	I1212 22:35:13.567676   99930 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:35:13.567691   99930 command_runner.go:130] > runtime_config_path = ""
	I1212 22:35:13.567702   99930 command_runner.go:130] > monitor_path = ""
	I1212 22:35:13.567709   99930 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:35:13.567719   99930 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:35:13.567732   99930 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:35:13.567743   99930 command_runner.go:130] > # running containers
	I1212 22:35:13.567750   99930 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:35:13.567763   99930 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:35:13.567814   99930 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:35:13.567824   99930 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:35:13.567828   99930 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:35:13.567833   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:35:13.567837   99930 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:35:13.567861   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:35:13.567869   99930 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:35:13.567873   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:35:13.567880   99930 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:35:13.567888   99930 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:35:13.567897   99930 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:35:13.567906   99930 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 22:35:13.567914   99930 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:35:13.567922   99930 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:35:13.567931   99930 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:35:13.567940   99930 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:35:13.567948   99930 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:35:13.567955   99930 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:35:13.567961   99930 command_runner.go:130] > # Example:
	I1212 22:35:13.567966   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:35:13.567973   99930 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:35:13.567978   99930 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:35:13.567985   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:35:13.567989   99930 command_runner.go:130] > # cpuset = "0-1"
	I1212 22:35:13.567995   99930 command_runner.go:130] > # cpushares = 0
	I1212 22:35:13.567999   99930 command_runner.go:130] > # Where:
	I1212 22:35:13.568005   99930 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:35:13.568012   99930 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:35:13.568022   99930 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:35:13.568030   99930 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:35:13.568038   99930 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:35:13.568046   99930 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:35:13.568050   99930 command_runner.go:130] > # 
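	The workload mechanism described in the commented config above is driven purely by pod annotations. As an illustration only (pod name, container name, and the "512" value are hypothetical, not taken from this test run), a pod opting into the example "workload-type" workload would carry the key-only activation annotation plus an optional per-container override, e.g. built with client-go types:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod opting into the commented "workload-type" example above.
		// The activation annotation is key-only (value ignored); the prefixed
		// annotation overrides cpushares for the container named "app".
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					"io.crio/workload":          "",
					"io.crio.workload-type/app": `{"cpushares": "512"}`,
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
			},
		}
		fmt.Println(pod.Annotations)
	}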
	I1212 22:35:13.568058   99930 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:35:13.568061   99930 command_runner.go:130] > #
	I1212 22:35:13.568067   99930 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:35:13.568075   99930 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:35:13.568081   99930 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:35:13.568093   99930 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:35:13.568102   99930 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:35:13.568108   99930 command_runner.go:130] > [crio.image]
	I1212 22:35:13.568114   99930 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:35:13.568120   99930 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:35:13.568126   99930 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:35:13.568134   99930 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:35:13.568138   99930 command_runner.go:130] > # global_auth_file = ""
	I1212 22:35:13.568145   99930 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:35:13.568153   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:35:13.568158   99930 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:35:13.568167   99930 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:35:13.568172   99930 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:35:13.568180   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:35:13.568188   99930 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:35:13.568196   99930 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:35:13.568205   99930 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 22:35:13.568213   99930 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 22:35:13.568219   99930 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:35:13.568226   99930 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:35:13.568231   99930 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:35:13.568240   99930 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:35:13.568246   99930 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:35:13.568254   99930 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:35:13.568260   99930 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:35:13.568264   99930 command_runner.go:130] > # signature_policy = ""
	I1212 22:35:13.568272   99930 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:35:13.568278   99930 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:35:13.568282   99930 command_runner.go:130] > # changing them here.
	I1212 22:35:13.568285   99930 command_runner.go:130] > # insecure_registries = [
	I1212 22:35:13.568289   99930 command_runner.go:130] > # ]
	I1212 22:35:13.568294   99930 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:35:13.568299   99930 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:35:13.568303   99930 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:35:13.568308   99930 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:35:13.568312   99930 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:35:13.568331   99930 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:35:13.568336   99930 command_runner.go:130] > # CNI plugins.
	I1212 22:35:13.568342   99930 command_runner.go:130] > [crio.network]
	I1212 22:35:13.568348   99930 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:35:13.568355   99930 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 22:35:13.568359   99930 command_runner.go:130] > # cni_default_network = ""
	I1212 22:35:13.568365   99930 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:35:13.568371   99930 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:35:13.568380   99930 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:35:13.568386   99930 command_runner.go:130] > # plugin_dirs = [
	I1212 22:35:13.568390   99930 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:35:13.568395   99930 command_runner.go:130] > # ]
	I1212 22:35:13.568400   99930 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:35:13.568407   99930 command_runner.go:130] > [crio.metrics]
	I1212 22:35:13.568411   99930 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:35:13.568416   99930 command_runner.go:130] > enable_metrics = true
	I1212 22:35:13.568421   99930 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:35:13.568428   99930 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 22:35:13.568434   99930 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:35:13.568442   99930 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:35:13.568448   99930 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:35:13.568454   99930 command_runner.go:130] > # metrics_collectors = [
	I1212 22:35:13.568458   99930 command_runner.go:130] > # 	"operations",
	I1212 22:35:13.568466   99930 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:35:13.568470   99930 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:35:13.568481   99930 command_runner.go:130] > # 	"operations_errors",
	I1212 22:35:13.568488   99930 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:35:13.568495   99930 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:35:13.568499   99930 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:35:13.568505   99930 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:35:13.568510   99930 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:35:13.568514   99930 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:35:13.568520   99930 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:35:13.568524   99930 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:35:13.568530   99930 command_runner.go:130] > # 	"containers_oom",
	I1212 22:35:13.568535   99930 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:35:13.568539   99930 command_runner.go:130] > # 	"operations_total",
	I1212 22:35:13.568545   99930 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:35:13.568550   99930 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:35:13.568556   99930 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:35:13.568561   99930 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:35:13.568567   99930 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:35:13.568571   99930 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:35:13.568578   99930 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:35:13.568584   99930 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:35:13.568592   99930 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:35:13.568595   99930 command_runner.go:130] > # ]
	I1212 22:35:13.568602   99930 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:35:13.568606   99930 command_runner.go:130] > # metrics_port = 9090
	I1212 22:35:13.568614   99930 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:35:13.568618   99930 command_runner.go:130] > # metrics_socket = ""
	I1212 22:35:13.568622   99930 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:35:13.568630   99930 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:35:13.568636   99930 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:35:13.568643   99930 command_runner.go:130] > # certificate on any modification event.
	I1212 22:35:13.568647   99930 command_runner.go:130] > # metrics_cert = ""
	I1212 22:35:13.568654   99930 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:35:13.568659   99930 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:35:13.568665   99930 command_runner.go:130] > # metrics_key = ""
	I1212 22:35:13.568671   99930 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:35:13.568677   99930 command_runner.go:130] > [crio.tracing]
	I1212 22:35:13.568682   99930 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:35:13.568693   99930 command_runner.go:130] > # enable_tracing = false
	I1212 22:35:13.568701   99930 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 22:35:13.568705   99930 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:35:13.568710   99930 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:35:13.568717   99930 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:35:13.568723   99930 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:35:13.568729   99930 command_runner.go:130] > [crio.stats]
	I1212 22:35:13.568735   99930 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:35:13.568746   99930 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:35:13.568750   99930 command_runner.go:130] > # stats_collection_period = 0
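	The config dumped above enables CRI-O metrics (enable_metrics = true) while leaving metrics_port at its commented default, so the Prometheus endpoint would normally be reachable on port 9090 of the node. A minimal Go sketch that scrapes it (the endpoint is an assumption based on those defaults, for illustration only):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumes the CRI-O metrics server listens on the default
		// (commented-out) metrics_port = 9090 on the local node.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d, %d bytes of Prometheus metrics\n", resp.StatusCode, len(body))
	}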
	I1212 22:35:13.569660   99930 command_runner.go:130] ! time="2023-12-12 22:35:13.511739281Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 22:35:13.569688   99930 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:35:13.569824   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:35:13.569842   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:35:13.569862   99930 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:35:13.569880   99930 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-054207 NodeName:multinode-054207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:35:13.570013   99930 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-054207"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:35:13.570078   99930 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:35:13.570131   99930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:35:13.580707   99930 command_runner.go:130] > kubeadm
	I1212 22:35:13.580734   99930 command_runner.go:130] > kubectl
	I1212 22:35:13.580741   99930 command_runner.go:130] > kubelet
	I1212 22:35:13.580772   99930 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:35:13.580848   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:35:13.590767   99930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1212 22:35:13.607818   99930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:35:13.624837   99930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1212 22:35:13.643144   99930 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1212 22:35:13.647344   99930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
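	The one-liner above ensures /etc/hosts maps control-plane.minikube.internal to the node IP by stripping any stale entry and appending a fresh one. A standalone Go sketch of the same idea (not minikube's implementation, which runs the bash command over SSH; writing /etc/hosts requires root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the logged bash one-liner: drop any existing line
	// ending in "<TAB>host", then append "ip<TAB>host".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry, equivalent to grep -v $'\thost$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.172", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}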
	I1212 22:35:13.660222   99930 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207 for IP: 192.168.39.172
	I1212 22:35:13.660270   99930 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:35:13.660468   99930 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:35:13.660536   99930 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:35:13.660646   99930 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key
	I1212 22:35:13.660734   99930 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key.ee96354a
	I1212 22:35:13.660800   99930 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key
	I1212 22:35:13.660816   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 22:35:13.660839   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 22:35:13.660869   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 22:35:13.660891   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 22:35:13.660912   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:35:13.660935   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:35:13.660955   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:35:13.660976   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:35:13.661062   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:35:13.661128   99930 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:35:13.661154   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:35:13.661194   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:35:13.661236   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:35:13.661274   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:35:13.661341   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:35:13.661379   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:35:13.661403   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:35:13.661425   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:35:13.662358   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:35:13.687221   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:35:13.711780   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:35:13.735669   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:35:13.762256   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:35:13.786803   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:35:13.812128   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:35:13.837027   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:35:13.861241   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:35:13.885839   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:35:13.909810   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:35:13.933855   99930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:35:13.950795   99930 ssh_runner.go:195] Run: openssl version
	I1212 22:35:13.956759   99930 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 22:35:13.956842   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:35:13.966937   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:35:13.971754   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:35:13.971802   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:35:13.971846   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:35:13.977304   99930 command_runner.go:130] > b5213941
	I1212 22:35:13.977574   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:35:13.987505   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:35:13.997294   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:35:14.002892   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:35:14.003068   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:35:14.003144   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:35:14.009345   99930 command_runner.go:130] > 51391683
	I1212 22:35:14.009436   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 22:35:14.020463   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:35:14.031375   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:35:14.036316   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:35:14.036465   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:35:14.036521   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:35:14.042218   99930 command_runner.go:130] > 3ec20f2e
	I1212 22:35:14.042548   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
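	The test/ln pairs above install each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name, which is how OpenSSL-based clients locate trust anchors. A Go sketch of that pattern (assumes the openssl binary is on PATH and write access to the target directory; paths are examples taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash computes the OpenSSL subject hash of a CA certificate and
	// symlinks it as <hash>.0 under certsDir, mirroring the logged commands.
	func linkCertByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}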
	I1212 22:35:14.053173   99930 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:35:14.057722   99930 command_runner.go:130] > ca.crt
	I1212 22:35:14.057743   99930 command_runner.go:130] > ca.key
	I1212 22:35:14.057751   99930 command_runner.go:130] > healthcheck-client.crt
	I1212 22:35:14.057757   99930 command_runner.go:130] > healthcheck-client.key
	I1212 22:35:14.057764   99930 command_runner.go:130] > peer.crt
	I1212 22:35:14.057770   99930 command_runner.go:130] > peer.key
	I1212 22:35:14.057776   99930 command_runner.go:130] > server.crt
	I1212 22:35:14.057794   99930 command_runner.go:130] > server.key
	I1212 22:35:14.057964   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 22:35:14.063983   99930 command_runner.go:130] > Certificate will not expire
	I1212 22:35:14.064067   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 22:35:14.070003   99930 command_runner.go:130] > Certificate will not expire
	I1212 22:35:14.070162   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 22:35:14.076491   99930 command_runner.go:130] > Certificate will not expire
	I1212 22:35:14.076560   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 22:35:14.082602   99930 command_runner.go:130] > Certificate will not expire
	I1212 22:35:14.082671   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 22:35:14.088702   99930 command_runner.go:130] > Certificate will not expire
	I1212 22:35:14.088778   99930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 22:35:14.094983   99930 command_runner.go:130] > Certificate will not expire
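	Each "openssl x509 -noout -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check expressed in Go, as a sketch (the path is one example from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// i.e. the question answered by "openssl x509 -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}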
	I1212 22:35:14.095056   99930 kubeadm.go:404] StartCluster: {Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:35:14.095204   99930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:35:14.095278   99930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:35:14.134179   99930 cri.go:89] found id: ""
	I1212 22:35:14.134265   99930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:35:14.144089   99930 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 22:35:14.144121   99930 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 22:35:14.144130   99930 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 22:35:14.144141   99930 command_runner.go:130] > member
	I1212 22:35:14.144173   99930 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 22:35:14.144197   99930 kubeadm.go:636] restartCluster start
	I1212 22:35:14.144249   99930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 22:35:14.153439   99930 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:14.154120   99930 kubeconfig.go:92] found "multinode-054207" server: "https://192.168.39.172:8443"
	I1212 22:35:14.154558   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:35:14.154836   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
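	The rest.Config dumped above is derived from the kubeconfig on disk and is what the restart logic uses to reach the API server at https://192.168.39.172:8443. A generic client-go sketch that builds an equivalent client from the same kubeconfig path (illustrative only, not minikube's kapi.go code; listing nodes will fail until the apiserver is back up, as the following log lines show):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the kubeconfig path shown in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			fmt.Println("cluster not reachable yet:", err)
			return
		}
		fmt.Println("nodes:", len(nodes.Items))
	}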
	I1212 22:35:14.155571   99930 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 22:35:14.156129   99930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 22:35:14.164699   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:14.164773   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:14.175377   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:14.175398   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:14.175437   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:14.186085   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:14.686880   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:14.711434   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:14.722990   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:15.186240   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:15.186333   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:15.197981   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:15.686546   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:15.686679   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:15.699969   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:16.186522   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:16.186629   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:16.198492   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:16.686700   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:16.686794   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:16.699695   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:17.186216   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:17.186324   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:17.198049   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:17.687207   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:17.687332   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:17.699029   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:18.186550   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:18.186656   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:18.198431   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:18.687021   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:18.687117   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:18.700169   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:19.186588   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:19.186724   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:19.198727   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:19.686727   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:19.686837   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:19.698433   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:20.187035   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:20.187148   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:20.200861   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:20.686377   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:20.686478   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:20.698312   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:21.186939   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:21.187067   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:21.198702   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:21.686238   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:21.686362   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:21.698288   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:22.186926   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:22.187033   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:22.198992   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:22.687116   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:22.687197   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:22.698897   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:23.186426   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:23.186519   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:23.198223   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:23.686741   99930 api_server.go:166] Checking apiserver status ...
	I1212 22:35:23.686833   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 22:35:23.698727   99930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 22:35:24.165511   99930 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 22:35:24.165541   99930 kubeadm.go:1135] stopping kube-system containers ...
	I1212 22:35:24.165556   99930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 22:35:24.165611   99930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:35:24.210686   99930 cri.go:89] found id: ""
	I1212 22:35:24.210781   99930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 22:35:24.226349   99930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:35:24.235499   99930 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 22:35:24.235529   99930 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 22:35:24.235537   99930 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 22:35:24.235544   99930 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:35:24.235578   99930 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:35:24.235620   99930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:35:24.245120   99930 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 22:35:24.245145   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:24.355112   99930 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:35:24.355136   99930 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 22:35:24.355144   99930 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 22:35:24.355150   99930 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 22:35:24.355156   99930 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 22:35:24.355165   99930 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 22:35:24.355181   99930 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 22:35:24.355194   99930 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 22:35:24.355207   99930 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 22:35:24.355219   99930 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 22:35:24.355229   99930 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 22:35:24.355250   99930 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 22:35:24.355340   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:24.408615   99930 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:35:24.759337   99930 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:35:24.852638   99930 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:35:24.912215   99930 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:35:25.054843   99930 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:35:25.057780   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:25.121264   99930 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:35:25.122430   99930 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:35:25.122523   99930 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:35:25.243466   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:25.326825   99930 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:35:25.326857   99930 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:35:25.326866   99930 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:35:25.326877   99930 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:35:25.326908   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:25.390321   99930 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:35:25.390365   99930 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:35:25.390429   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:25.408168   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:25.922506   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:26.422678   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:26.922569   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:27.422105   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:27.922122   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:27.946594   99930 command_runner.go:130] > 1067
	I1212 22:35:27.946654   99930 api_server.go:72] duration metric: took 2.556287461s to wait for apiserver process to appear ...
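
The process wait above simply retries pgrep roughly every 500ms until a kube-apiserver PID (1067 in this run) appears. A rough sketch of that polling loop, assuming local exec rather than the ssh_runner used in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until a kube-apiserver process shows up
    // or the timeout expires. Running it locally (not over SSH) is an
    // illustration-only simplification.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil // e.g. "1067"
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerPID(2 * time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver pid:", pid)
    }
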
	I1212 22:35:27.946704   99930 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:35:27.946725   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:31.437601   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 22:35:31.437644   99930 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 22:35:31.437657   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:31.516033   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 22:35:31.516074   99930 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 22:35:32.016802   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:32.031447   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 22:35:32.031497   99930 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 22:35:32.516680   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:32.523069   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 22:35:32.523099   99930 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 22:35:33.016625   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:33.021988   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1212 22:35:33.022130   99930 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I1212 22:35:33.022141   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:33.022155   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:33.022165   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:33.030683   99930 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 22:35:33.030726   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:33.030753   99930 round_trippers.go:580]     Audit-Id: c20d092f-36ab-4785-80b4-2588c06bd3c9
	I1212 22:35:33.030759   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:33.030764   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:33.030770   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:33.030778   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:33.030784   99930 round_trippers.go:580]     Content-Length: 264
	I1212 22:35:33.030796   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:32 GMT
	I1212 22:35:33.030818   99930 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 22:35:33.030902   99930 api_server.go:141] control plane version: v1.28.4
	I1212 22:35:33.030924   99930 api_server.go:131] duration metric: took 5.084212082s to wait for apiserver health ...
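
The health wait polls /healthz until it returns 200 "ok" (the early 403s are anonymous requests rejected before RBAC bootstrap finishes, and the 500s reflect the [-] poststarthook checks still failing), then reads /version to record the control-plane build. A condensed sketch of that probe, assuming an insecure TLS client for brevity instead of the cluster CA and client certificates minikube actually uses:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // versionInfo matches the fields of the /version response shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        // Skipping certificate verification is an illustration-only shortcut.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        base := "https://192.168.39.172:8443" // apiserver endpoint from the log

        // Poll /healthz until it returns 200 "ok".
        for {
            resp, err := client.Get(base + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    break
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }

        // Once healthy, read the control-plane version.
        resp, err := client.Get(base + "/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.28.4 in this run
    }

The retries run on a fixed cadence rather than an exponential backoff, which is why the log shows healthz checks at roughly half-second intervals.
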
	I1212 22:35:33.030934   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:35:33.030942   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:35:33.032864   99930 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:35:33.034294   99930 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:35:33.048063   99930 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:35:33.048089   99930 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 22:35:33.048098   99930 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 22:35:33.048109   99930 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:35:33.048119   99930 command_runner.go:130] > Access: 2023-12-12 22:35:00.248811046 +0000
	I1212 22:35:33.048128   99930 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 22:35:33.048136   99930 command_runner.go:130] > Change: 2023-12-12 22:34:58.322811046 +0000
	I1212 22:35:33.048145   99930 command_runner.go:130] >  Birth: -
	I1212 22:35:33.049046   99930 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:35:33.049072   99930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:35:33.066484   99930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:35:34.069677   99930 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:35:34.074568   99930 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:35:34.079585   99930 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 22:35:34.093951   99930 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 22:35:34.096958   99930 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.030430573s)
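
Applying the CNI manifest amounts to a kubectl apply with the node-local kubeconfig. A minimal local equivalent, assuming the manifest has already been copied to /var/tmp/minikube/cni.yaml as shown above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths are taken from the log; running this directly on the node
        // (rather than through minikube's ssh_runner) is an assumption.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
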
	I1212 22:35:34.097044   99930 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:35:34.097195   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:34.097205   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.097214   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.097220   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.100961   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:34.100981   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.100996   99930 round_trippers.go:580]     Audit-Id: e5cc1349-4d0d-44e7-85a5-3255952737e0
	I1212 22:35:34.101005   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.101015   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.101023   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.101032   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.101045   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.102962   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82630 chars]
	I1212 22:35:34.106876   99930 system_pods.go:59] 12 kube-system pods found
	I1212 22:35:34.106929   99930 system_pods.go:61] "coredns-5dd5756b68-rj4p4" [8bd5cacb-68c8-41e5-a91e-07e6a9739897] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 22:35:34.106941   99930 system_pods.go:61] "etcd-multinode-054207" [2c328cec-c2e2-49d1-85af-66899f444c90] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 22:35:34.106960   99930 system_pods.go:61] "kindnet-gh2q6" [e9242a8e-6502-4550-a96a-d270e77dd6cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:34.106968   99930 system_pods.go:61] "kindnet-mth9w" [4fa2205d-2108-425a-a3c2-d8d219cad2e7] Running
	I1212 22:35:34.106979   99930 system_pods.go:61] "kindnet-nj2sh" [947b4acb-082a-436b-b68f-d253f391ee24] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:34.107000   99930 system_pods.go:61] "kube-apiserver-multinode-054207" [70bc63a6-e544-401c-90ae-7473ce8343da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 22:35:34.107016   99930 system_pods.go:61] "kube-controller-manager-multinode-054207" [9040c58b-7f77-4355-880f-991c010720f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 22:35:34.107045   99930 system_pods.go:61] "kube-proxy-jtfmt" [d38d8816-bb76-4b9d-aa24-33744ec196fa] Running
	I1212 22:35:34.107056   99930 system_pods.go:61] "kube-proxy-rnx8m" [e8875d71-d50e-44f1-92c1-db1858b4b3bb] Running
	I1212 22:35:34.107069   99930 system_pods.go:61] "kube-proxy-xfhnh" [2ca01f00-0c60-4a26-8baf-0718911a7f01] Running
	I1212 22:35:34.107080   99930 system_pods.go:61] "kube-scheduler-multinode-054207" [79f6cbd9-988a-4dc2-a910-15abd7598b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 22:35:34.107092   99930 system_pods.go:61] "storage-provisioner" [40d577b4-8d36-4f55-946d-92755b1d6998] Running
	I1212 22:35:34.107104   99930 system_pods.go:74] duration metric: took 10.043722ms to wait for pod list to return data ...
	I1212 22:35:34.107116   99930 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:35:34.107205   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:35:34.107212   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.107220   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.107231   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.109865   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.109886   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.109897   99930 round_trippers.go:580]     Audit-Id: aa0c6f6a-e319-45f5-8ad8-84116b74a17c
	I1212 22:35:34.109906   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.109922   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.109930   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.109938   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.109948   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.110235   99930 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"834"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16354 chars]
	I1212 22:35:34.111076   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:34.111119   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:34.111132   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:34.111139   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:34.111144   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:34.111150   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:34.111156   99930 node_conditions.go:105] duration metric: took 4.030604ms to run NodePressure ...
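
The NodePressure step lists /api/v1/nodes and reads each node's capacity (2 CPUs and 17784752Ki of ephemeral storage per node in this run). A small sketch of pulling those fields out of the NodeList payload, using a trimmed example document rather than a live request:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Minimal view of the NodeList payload shown above; only the fields the
    // capacity check cares about are modelled here.
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        // Trimmed-down example payload with the values reported in this run.
        raw := []byte(`{"items":[{"metadata":{"name":"multinode-054207"},
          "status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`)

        var nl nodeList
        if err := json.Unmarshal(raw, &nl); err != nil {
            panic(err)
        }
        for _, n := range nl.Items {
            fmt.Printf("node %s: storage=%s cpu=%s\n",
                n.Metadata.Name,
                n.Status.Capacity["ephemeral-storage"],
                n.Status.Capacity["cpu"])
        }
    }
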
	I1212 22:35:34.111179   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 22:35:34.279021   99930 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 22:35:34.339831   99930 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 22:35:34.341532   99930 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 22:35:34.341630   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1212 22:35:34.341639   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.341646   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.341652   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.349976   99930 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 22:35:34.350003   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.350011   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.350019   99930 round_trippers.go:580]     Audit-Id: 3ad54a87-e2a4-43e4-a27f-35765947f52c
	I1212 22:35:34.350026   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.350033   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.350042   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.350051   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.350498   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"836"},"items":[{"metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1212 22:35:34.351516   99930 kubeadm.go:787] kubelet initialised
	I1212 22:35:34.351539   99930 kubeadm.go:788] duration metric: took 9.985947ms waiting for restarted kubelet to initialise ...
	I1212 22:35:34.351550   99930 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:35:34.351624   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:34.351638   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.351649   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.351659   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.355205   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:34.355265   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.355277   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.355287   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.355294   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.355307   99930 round_trippers.go:580]     Audit-Id: 214c1f28-91f5-4744-97e9-1f233d64b2a4
	I1212 22:35:34.355317   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.355350   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.356256   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"836"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82630 chars]
	I1212 22:35:34.358744   99930 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:34.358837   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:34.358849   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.358859   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.358865   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.360931   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.360948   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.360958   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.360966   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.360981   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.360994   99930 round_trippers.go:580]     Audit-Id: 0dd68bd7-8975-4504-8ef0-542d2adec818
	I1212 22:35:34.361003   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.361016   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.361248   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:34.361785   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:34.361801   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.361808   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.361814   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.363609   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:34.363624   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.363633   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.363640   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.363648   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.363656   99930 round_trippers.go:580]     Audit-Id: dc7cf65e-0fef-4217-8566-72a15978bf19
	I1212 22:35:34.363666   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.363683   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.363983   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:34.364375   99930 pod_ready.go:97] node "multinode-054207" hosting pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.364400   99930 pod_ready.go:81] duration metric: took 5.630426ms waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:34.364412   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
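
Every pod_ready wait in this stretch follows the same pattern: fetch the pod, then fetch the node it runs on, and skip the wait with an error if that node is not yet Ready. A condensed sketch of the node-Ready check, using a sample Node document rather than a live request:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeReady reports whether a Node manifest (as returned by
    // /api/v1/nodes/<name>) has its Ready condition set to "True".
    func nodeReady(nodeJSON []byte) (bool, error) {
        var node struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(nodeJSON, &node); err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        // In the run above multinode-054207 still reported Ready=False right
        // after the kubelet restart, so every control-plane pod wait was skipped.
        sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
        ready, err := nodeReady(sample)
        if err != nil {
            panic(err)
        }
        if !ready {
            fmt.Println(`node "multinode-054207" has status "Ready":"False" - skipping pod wait`)
        }
    }
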
	I1212 22:35:34.364427   99930 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:34.364488   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:34.364498   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.364508   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.364519   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.366295   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:34.366313   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.366322   99930 round_trippers.go:580]     Audit-Id: ac53347e-7c9d-4f31-af61-a870dd7dc345
	I1212 22:35:34.366337   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.366345   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.366357   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.366368   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.366376   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.366657   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:34.367127   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:34.367145   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.367154   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.367163   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.369484   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.369500   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.369519   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.369533   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.369547   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.369557   99930 round_trippers.go:580]     Audit-Id: 328be2e7-9e18-42d6-af5e-77b6dbf30fc7
	I1212 22:35:34.369566   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.369574   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.369749   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:34.370004   99930 pod_ready.go:97] node "multinode-054207" hosting pod "etcd-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.370019   99930 pod_ready.go:81] duration metric: took 5.585037ms waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:34.370026   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "etcd-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.370038   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:34.370091   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:35:34.370100   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.370107   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.370113   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.372157   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.372177   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.372193   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.372201   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.372209   99930 round_trippers.go:580]     Audit-Id: 7352baf0-f66a-41db-92d0-d025f48a0c3f
	I1212 22:35:34.372218   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.372227   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.372239   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.372395   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"762","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1212 22:35:34.372901   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:34.372920   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.372930   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.372939   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.374753   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:34.374765   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.374770   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.374775   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.374780   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.374785   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.374790   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.374800   99930 round_trippers.go:580]     Audit-Id: dd1d344f-99e2-4c44-aeee-9ade5a61fbf4
	I1212 22:35:34.375327   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:34.375589   99930 pod_ready.go:97] node "multinode-054207" hosting pod "kube-apiserver-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.375605   99930 pod_ready.go:81] duration metric: took 5.5617ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:34.375613   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "kube-apiserver-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.375621   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:34.375659   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:35:34.375667   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.375673   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.375679   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.377471   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:34.377489   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.377498   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.377506   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.377514   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.377522   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.377531   99930 round_trippers.go:580]     Audit-Id: c187dc28-12c4-42b0-9327-5b1bb9baf2b7
	I1212 22:35:34.377540   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.377693   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"769","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1212 22:35:34.497756   99930 request.go:629] Waited for 119.59875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:34.497822   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:34.497827   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.497835   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.497841   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.500346   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.500367   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.500375   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.500380   99930 round_trippers.go:580]     Audit-Id: 98861186-f6a3-4166-8631-39bbd4544e99
	I1212 22:35:34.500385   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.500390   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.500395   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.500404   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.500733   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:34.501158   99930 pod_ready.go:97] node "multinode-054207" hosting pod "kube-controller-manager-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.501182   99930 pod_ready.go:81] duration metric: took 125.553011ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:34.501200   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "kube-controller-manager-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:34.501217   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:34.697683   99930 request.go:629] Waited for 196.371157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:35:34.697800   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:35:34.697815   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.697831   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.697845   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.700480   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:34.700503   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.700513   99930 round_trippers.go:580]     Audit-Id: 0047c0d1-ae7c-4026-b229-62c8f9da3a47
	I1212 22:35:34.700520   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.700527   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.700534   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.700542   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.700550   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.700787   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"515","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:35:34.897691   99930 request.go:629] Waited for 196.362025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:35:34.897761   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:35:34.897767   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:34.897775   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:34.897781   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:34.901447   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:34.901477   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:34.901487   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:34.901496   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:34.901504   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:34 GMT
	I1212 22:35:34.901522   99930 round_trippers.go:580]     Audit-Id: a066868f-0fe6-46ce-a508-c3f38bdd233d
	I1212 22:35:34.901536   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:34.901545   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:34.902476   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"748","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_27_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4235 chars]
	I1212 22:35:34.902749   99930 pod_ready.go:92] pod "kube-proxy-jtfmt" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:34.902764   99930 pod_ready.go:81] duration metric: took 401.536343ms waiting for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
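
The "Waited ... due to client-side throttling" lines come from the Kubernetes client's own rate limiter, not from API priority and fairness on the server. A toy illustration of that behaviour with golang.org/x/time/rate, assuming the commonly cited client-go defaults of roughly 5 QPS with a burst of 10:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // 5 requests/second with a burst of 10 is an assumption for
        // illustration; the first burst goes through immediately, after
        // which each request waits ~200ms, similar to the log above.
        limiter := rate.NewLimiter(5, 10)
        for i := 0; i < 15; i++ {
            start := time.Now()
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            if d := time.Since(start); d > time.Millisecond {
                fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
            }
        }
    }
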
	I1212 22:35:34.902778   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:35.098283   99930 request.go:629] Waited for 195.392637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:35:35.098361   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:35:35.098366   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:35.098374   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:35.098380   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:35.101179   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:35.101199   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:35.101207   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:35.101213   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:35.101221   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:35.101229   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:35 GMT
	I1212 22:35:35.101237   99930 round_trippers.go:580]     Audit-Id: b97182ed-0ceb-4551-89e2-f2b8d092d807
	I1212 22:35:35.101245   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:35.101475   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"833","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:35:35.298262   99930 request.go:629] Waited for 196.352842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:35.298352   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:35.298361   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:35.298369   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:35.298375   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:35.301062   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:35.301086   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:35.301096   99930 round_trippers.go:580]     Audit-Id: 1ef72b4d-e4c6-49ae-861a-561c3434494c
	I1212 22:35:35.301105   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:35.301111   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:35.301117   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:35.301122   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:35.301127   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:35 GMT
	I1212 22:35:35.301329   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:35.301659   99930 pod_ready.go:97] node "multinode-054207" hosting pod "kube-proxy-rnx8m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:35.301677   99930 pod_ready.go:81] duration metric: took 398.893793ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:35.301687   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "kube-proxy-rnx8m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:35.301697   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:35.498175   99930 request.go:629] Waited for 196.394588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:35:35.498249   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:35:35.498262   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:35.498271   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:35.498280   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:35.502452   99930 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:35:35.502475   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:35.502483   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:35.502488   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:35.502493   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:35.502498   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:35.502516   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:35 GMT
	I1212 22:35:35.502521   99930 round_trippers.go:580]     Audit-Id: 23407055-6ec3-4f5f-a2fc-0c277a5f323a
	I1212 22:35:35.503633   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xfhnh","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ca01f00-0c60-4a26-8baf-0718911a7f01","resourceVersion":"723","creationTimestamp":"2023-12-12T22:26:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:35:35.697353   99930 request.go:629] Waited for 193.298954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:35:35.697441   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:35:35.697446   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:35.697455   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:35.697463   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:35.700172   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:35.700190   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:35.700197   99930 round_trippers.go:580]     Audit-Id: 2728cb5e-19c4-4d3d-acc4-694752c8a392
	I1212 22:35:35.700205   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:35.700214   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:35.700221   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:35.700230   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:35.700238   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:35 GMT
	I1212 22:35:35.700367   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m03","uid":"b0e92539-35e0-4df7-a26b-9c088375b04e","resourceVersion":"753","creationTimestamp":"2023-12-12T22:27:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_27_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I1212 22:35:35.700640   99930 pod_ready.go:92] pod "kube-proxy-xfhnh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:35.700653   99930 pod_ready.go:81] duration metric: took 398.950208ms waiting for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:35.700663   99930 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:35.898159   99930 request.go:629] Waited for 197.392322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:35:35.898225   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:35:35.898231   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:35.898239   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:35.898245   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:35.902277   99930 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:35:35.902307   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:35.902315   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:35 GMT
	I1212 22:35:35.902321   99930 round_trippers.go:580]     Audit-Id: a5382980-de59-4538-a618-8d0e35e28ee8
	I1212 22:35:35.902326   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:35.902334   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:35.902343   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:35.902350   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:35.902542   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"765","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1212 22:35:36.097235   99930 request.go:629] Waited for 194.304976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.097322   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.097328   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:36.097337   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:36.097343   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:36.099988   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:36.100016   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:36.100028   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:36.100035   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:36.100043   99930 round_trippers.go:580]     Audit-Id: e0a91e91-133b-4438-ab7d-1b35eccf62d8
	I1212 22:35:36.100048   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:36.100053   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:36.100059   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:36.100343   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:36.100824   99930 pod_ready.go:97] node "multinode-054207" hosting pod "kube-scheduler-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:36.100854   99930 pod_ready.go:81] duration metric: took 400.184794ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	E1212 22:35:36.100868   99930 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-054207" hosting pod "kube-scheduler-multinode-054207" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-054207" has status "Ready":"False"
	I1212 22:35:36.100882   99930 pod_ready.go:38] duration metric: took 1.749321052s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:35:36.100913   99930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:35:36.113383   99930 command_runner.go:130] > -16
	I1212 22:35:36.113417   99930 ops.go:34] apiserver oom_adj: -16
	I1212 22:35:36.113423   99930 kubeadm.go:640] restartCluster took 21.969220742s
	I1212 22:35:36.113433   99930 kubeadm.go:406] StartCluster complete in 22.018383132s
	I1212 22:35:36.113464   99930 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:35:36.113562   99930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:35:36.114544   99930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:35:36.114847   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:35:36.114996   99930 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 22:35:36.117816   99930 out.go:177] * Enabled addons: 
	I1212 22:35:36.115154   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:36.115291   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:35:36.119085   99930 addons.go:502] enable addons completed in 4.099624ms: enabled=[]
	I1212 22:35:36.119373   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:35:36.119740   99930 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:35:36.119753   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:36.119761   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:36.119767   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:36.122587   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:36.122607   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:36.122617   99930 round_trippers.go:580]     Audit-Id: 3bf944c9-9e38-4565-acb1-35336ca513ce
	I1212 22:35:36.122634   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:36.122642   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:36.122660   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:36.122670   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:36.122682   99930 round_trippers.go:580]     Content-Length: 291
	I1212 22:35:36.122689   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:36.122754   99930 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"835","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 22:35:36.123006   99930 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-054207" context rescaled to 1 replicas
	I1212 22:35:36.123048   99930 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:35:36.124632   99930 out.go:177] * Verifying Kubernetes components...
	I1212 22:35:36.126070   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:35:36.220081   99930 command_runner.go:130] > apiVersion: v1
	I1212 22:35:36.220109   99930 command_runner.go:130] > data:
	I1212 22:35:36.220114   99930 command_runner.go:130] >   Corefile: |
	I1212 22:35:36.220119   99930 command_runner.go:130] >     .:53 {
	I1212 22:35:36.220122   99930 command_runner.go:130] >         log
	I1212 22:35:36.220128   99930 command_runner.go:130] >         errors
	I1212 22:35:36.220132   99930 command_runner.go:130] >         health {
	I1212 22:35:36.220137   99930 command_runner.go:130] >            lameduck 5s
	I1212 22:35:36.220140   99930 command_runner.go:130] >         }
	I1212 22:35:36.220145   99930 command_runner.go:130] >         ready
	I1212 22:35:36.220150   99930 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 22:35:36.220159   99930 command_runner.go:130] >            pods insecure
	I1212 22:35:36.220165   99930 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 22:35:36.220169   99930 command_runner.go:130] >            ttl 30
	I1212 22:35:36.220173   99930 command_runner.go:130] >         }
	I1212 22:35:36.220180   99930 command_runner.go:130] >         prometheus :9153
	I1212 22:35:36.220184   99930 command_runner.go:130] >         hosts {
	I1212 22:35:36.220190   99930 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1212 22:35:36.220195   99930 command_runner.go:130] >            fallthrough
	I1212 22:35:36.220198   99930 command_runner.go:130] >         }
	I1212 22:35:36.220206   99930 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 22:35:36.220211   99930 command_runner.go:130] >            max_concurrent 1000
	I1212 22:35:36.220217   99930 command_runner.go:130] >         }
	I1212 22:35:36.220221   99930 command_runner.go:130] >         cache 30
	I1212 22:35:36.220231   99930 command_runner.go:130] >         loop
	I1212 22:35:36.220235   99930 command_runner.go:130] >         reload
	I1212 22:35:36.220243   99930 command_runner.go:130] >         loadbalance
	I1212 22:35:36.220252   99930 command_runner.go:130] >     }
	I1212 22:35:36.220261   99930 command_runner.go:130] > kind: ConfigMap
	I1212 22:35:36.220275   99930 command_runner.go:130] > metadata:
	I1212 22:35:36.220286   99930 command_runner.go:130] >   creationTimestamp: "2023-12-12T22:25:10Z"
	I1212 22:35:36.220296   99930 command_runner.go:130] >   name: coredns
	I1212 22:35:36.220307   99930 command_runner.go:130] >   namespace: kube-system
	I1212 22:35:36.220321   99930 command_runner.go:130] >   resourceVersion: "398"
	I1212 22:35:36.220332   99930 command_runner.go:130] >   uid: 731b2461-d0ec-4a4c-8705-affc9d0f579b
	I1212 22:35:36.222635   99930 node_ready.go:35] waiting up to 6m0s for node "multinode-054207" to be "Ready" ...
	I1212 22:35:36.222774   99930 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 22:35:36.298010   99930 request.go:629] Waited for 75.26473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.298093   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.298098   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:36.298106   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:36.298113   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:36.300587   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:36.300607   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:36.300614   99930 round_trippers.go:580]     Audit-Id: bff96c1d-bff8-4a1c-82ca-998f5a2a8bb5
	I1212 22:35:36.300622   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:36.300627   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:36.300632   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:36.300637   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:36.300642   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:36.300828   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:36.497590   99930 request.go:629] Waited for 196.38141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.497672   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:36.497677   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:36.497685   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:36.497691   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:36.500465   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:36.500516   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:36.500528   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:36.500537   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:36.500546   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:36.500554   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:36.500563   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:36.500568   99930 round_trippers.go:580]     Audit-Id: 8b7e251b-14d7-46bd-a308-be0fdc3f90fb
	I1212 22:35:36.500698   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"759","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1212 22:35:37.001872   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:37.001903   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.001917   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.001927   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.005631   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:37.005657   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.005664   99930 round_trippers.go:580]     Audit-Id: fb76f00a-6245-4938-bc7c-918b46e40bc5
	I1212 22:35:37.005670   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.005675   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.005680   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.005685   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.005690   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:37.005861   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:37.006196   99930 node_ready.go:49] node "multinode-054207" has status "Ready":"True"
	I1212 22:35:37.006212   99930 node_ready.go:38] duration metric: took 783.549784ms waiting for node "multinode-054207" to be "Ready" ...
	I1212 22:35:37.006221   99930 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:35:37.006299   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:37.006312   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.006323   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.006336   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.009910   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:37.009934   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.009949   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:36 GMT
	I1212 22:35:37.009958   99930 round_trippers.go:580]     Audit-Id: 09088e40-4288-499d-8a86-ab4f321e04b0
	I1212 22:35:37.009967   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.009999   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.010009   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.010014   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.011424   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82949 chars]
	I1212 22:35:37.013968   99930 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:37.097282   99930 request.go:629] Waited for 83.203336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:37.097354   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:37.097361   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.097368   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.097375   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.100214   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:37.100240   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.100247   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.100252   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.100257   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:37 GMT
	I1212 22:35:37.100262   99930 round_trippers.go:580]     Audit-Id: aacb00a0-b0d5-4bfd-bfe5-b0012f98ec4a
	I1212 22:35:37.100267   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.100276   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.100408   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:37.298082   99930 request.go:629] Waited for 197.179373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:37.298192   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:37.298200   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.298211   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.298225   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.301120   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:37.301148   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.301157   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:37 GMT
	I1212 22:35:37.301162   99930 round_trippers.go:580]     Audit-Id: 27d8a607-cfe4-4c66-a541-3bd19d83d2a5
	I1212 22:35:37.301168   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.301173   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.301179   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.301185   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.301378   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:37.498234   99930 request.go:629] Waited for 196.353393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:37.498313   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:37.498318   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.498326   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.498332   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.501126   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:37.501146   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.501156   99930 round_trippers.go:580]     Audit-Id: 90bd76f6-3805-4a9e-b86e-8bb746296b2b
	I1212 22:35:37.501164   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.501172   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.501180   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.501188   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.501194   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:37 GMT
	I1212 22:35:37.501359   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:37.698274   99930 request.go:629] Waited for 196.399826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:37.698335   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:37.698340   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:37.698348   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:37.698353   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:37.701204   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:37.701231   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:37.701249   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:37.701263   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:37.701271   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:37 GMT
	I1212 22:35:37.701276   99930 round_trippers.go:580]     Audit-Id: dc966e2f-697c-4311-b47d-436fdf39aa69
	I1212 22:35:37.701281   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:37.701286   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:37.701547   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:38.202185   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:38.202215   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:38.202223   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:38.202229   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:38.204922   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:38.204943   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:38.204950   99930 round_trippers.go:580]     Audit-Id: f3838a5e-25ff-45ef-b56b-7ed7ce2f7479
	I1212 22:35:38.204956   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:38.204961   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:38.204968   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:38.204977   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:38.204985   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:38 GMT
	I1212 22:35:38.205329   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:38.205798   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:38.205815   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:38.205822   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:38.205828   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:38.208259   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:38.208278   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:38.208289   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:38.208298   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:38.208307   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:38 GMT
	I1212 22:35:38.208315   99930 round_trippers.go:580]     Audit-Id: ced2b02c-b3e3-4e4d-9a6f-b1ad72d1b22b
	I1212 22:35:38.208323   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:38.208335   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:38.208458   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:38.702120   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:38.702150   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:38.702162   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:38.702170   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:38.705316   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:38.705344   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:38.705355   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:38.705363   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:38.705371   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:38.705379   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:38.705387   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:38 GMT
	I1212 22:35:38.705395   99930 round_trippers.go:580]     Audit-Id: 2e2490e9-e176-480c-955b-c13e19626c16
	I1212 22:35:38.705998   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:38.706539   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:38.706558   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:38.706565   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:38.706571   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:38.709139   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:38.709161   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:38.709167   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:38.709172   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:38.709177   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:38.709182   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:38.709188   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:38 GMT
	I1212 22:35:38.709193   99930 round_trippers.go:580]     Audit-Id: da400efa-be03-4695-aa45-db9a8144ece6
	I1212 22:35:38.709291   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:39.203046   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:39.203080   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:39.203092   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:39.203101   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:39.209367   99930 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:35:39.209396   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:39.209403   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:39 GMT
	I1212 22:35:39.209409   99930 round_trippers.go:580]     Audit-Id: da995cf4-c161-458d-a1f9-027d03c9433b
	I1212 22:35:39.209414   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:39.209422   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:39.209433   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:39.209438   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:39.210211   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:39.210882   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:39.210907   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:39.210920   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:39.210931   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:39.219030   99930 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1212 22:35:39.219057   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:39.219066   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:39.219071   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:39.219077   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:39.219082   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:39 GMT
	I1212 22:35:39.219087   99930 round_trippers.go:580]     Audit-Id: df16ce76-2fa0-488d-8ce1-83927fe90cbc
	I1212 22:35:39.219092   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:39.220218   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:39.220591   99930 pod_ready.go:102] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"False"
	I1212 22:35:39.702528   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:39.702558   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:39.702569   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:39.702578   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:39.709300   99930 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 22:35:39.709330   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:39.709341   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:39 GMT
	I1212 22:35:39.709349   99930 round_trippers.go:580]     Audit-Id: b99f601f-edbc-448a-b9ca-a8ce3cab65eb
	I1212 22:35:39.709355   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:39.709363   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:39.709371   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:39.709378   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:39.709570   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:39.710104   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:39.710123   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:39.710131   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:39.710136   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:39.713613   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:39.713647   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:39.713659   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:39.713668   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:39.713679   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:39 GMT
	I1212 22:35:39.713685   99930 round_trippers.go:580]     Audit-Id: 09d4e091-8037-4530-8f54-972d9d534d65
	I1212 22:35:39.713693   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:39.713702   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:39.713811   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:40.202391   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:40.202415   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.202427   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.202433   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.206124   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:40.206157   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.206165   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.206174   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.206183   99930 round_trippers.go:580]     Audit-Id: 1332a3a2-fc71-4699-a6b4-5b4bc5b30e57
	I1212 22:35:40.206205   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.206215   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.206223   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.206393   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"782","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1212 22:35:40.206994   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:40.207012   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.207019   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.207029   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.209921   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:40.209942   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.209951   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.209959   99930 round_trippers.go:580]     Audit-Id: aaf250c4-b730-48d8-818e-c30ecd2e4155
	I1212 22:35:40.209968   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.209978   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.209987   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.209995   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.210147   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:40.702860   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:35:40.702887   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.702896   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.702910   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.706757   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:40.706776   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.706784   99930 round_trippers.go:580]     Audit-Id: 3f31a108-3631-426a-bd87-a15c4eaeaffa
	I1212 22:35:40.706789   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.706794   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.706799   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.706805   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.706810   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.707670   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1212 22:35:40.708133   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:40.708149   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.708164   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.708173   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.710614   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:40.710636   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.710646   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.710654   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.710661   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.710669   99930 round_trippers.go:580]     Audit-Id: 10b75aa7-ae98-4da4-ba33-2c38e5db9847
	I1212 22:35:40.710685   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.710693   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.711061   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:40.711450   99930 pod_ready.go:92] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:40.711473   99930 pod_ready.go:81] duration metric: took 3.697482202s waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:40.711492   99930 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:40.711550   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:40.711559   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.711567   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.711574   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.719354   99930 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 22:35:40.719382   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.719394   99930 round_trippers.go:580]     Audit-Id: f68c41e3-df42-4282-9dfb-98307ac79129
	I1212 22:35:40.719403   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.719411   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.719423   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.719440   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.719452   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.719618   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:40.720153   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:40.720174   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.720182   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.720188   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.722330   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:40.722352   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.722361   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.722369   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.722378   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.722387   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.722398   99930 round_trippers.go:580]     Audit-Id: 42c40391-562b-4abe-a311-44212dd35c23
	I1212 22:35:40.722410   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.722608   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:40.723009   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:40.723024   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.723034   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.723040   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.725309   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:40.725326   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.725333   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.725338   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.725343   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.725349   99930 round_trippers.go:580]     Audit-Id: c43f1399-640a-4616-82ae-c07f6a13f3c3
	I1212 22:35:40.725354   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.725361   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.725555   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:40.897325   99930 request.go:629] Waited for 171.278675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:40.897417   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:40.897428   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:40.897440   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:40.897454   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:40.900446   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:40.900475   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:40.900490   99930 round_trippers.go:580]     Audit-Id: ab0e97d5-667a-4ce3-a13d-35ddd2ced009
	I1212 22:35:40.900499   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:40.900505   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:40.900510   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:40.900518   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:40.900523   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:40 GMT
	I1212 22:35:40.900689   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:41.401967   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:41.402012   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:41.402023   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:41.402030   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:41.404655   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:41.404687   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:41.404697   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:41.404705   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:41 GMT
	I1212 22:35:41.404713   99930 round_trippers.go:580]     Audit-Id: a776d65b-61e2-4416-9d8d-6b02c79bf5e9
	I1212 22:35:41.404728   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:41.404736   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:41.404744   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:41.404915   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:41.405477   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:41.405501   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:41.405513   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:41.405523   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:41.409668   99930 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:35:41.409686   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:41.409704   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:41.409713   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:41.409722   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:41.409743   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:41.409753   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:41 GMT
	I1212 22:35:41.409766   99930 round_trippers.go:580]     Audit-Id: fa3d58bb-2e0b-4052-9392-ec00ba01d1b1
	I1212 22:35:41.409974   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:41.901588   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:41.901617   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:41.901629   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:41.901638   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:41.904237   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:41.904261   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:41.904269   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:41.904275   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:41.904280   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:41 GMT
	I1212 22:35:41.904285   99930 round_trippers.go:580]     Audit-Id: 1bbd8f0b-4fe3-4288-b43b-10131092dda1
	I1212 22:35:41.904291   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:41.904303   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:41.904495   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:41.904931   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:41.904949   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:41.904960   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:41.904968   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:41.906965   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:41.906982   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:41.906992   99930 round_trippers.go:580]     Audit-Id: a3defacc-0fcf-4d20-87f7-bce00d1f5a15
	I1212 22:35:41.906999   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:41.907007   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:41.907015   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:41.907028   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:41.907041   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:41 GMT
	I1212 22:35:41.907201   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:42.401788   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:42.401822   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:42.401833   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:42.401842   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:42.404767   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:42.404790   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:42.404798   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:42.404803   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:42 GMT
	I1212 22:35:42.404808   99930 round_trippers.go:580]     Audit-Id: f68a9ca7-22f5-4ef6-8e13-fc2972cbca22
	I1212 22:35:42.404813   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:42.404819   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:42.404827   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:42.405564   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:42.405972   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:42.405989   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:42.405999   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:42.406008   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:42.408761   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:42.408786   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:42.408796   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:42.408805   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:42.408815   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:42.408823   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:42 GMT
	I1212 22:35:42.408831   99930 round_trippers.go:580]     Audit-Id: 8f83558a-dd32-4b9c-98f9-d53243037497
	I1212 22:35:42.408841   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:42.409445   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:42.901615   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:42.901641   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:42.901650   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:42.901656   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:42.905024   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:42.905049   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:42.905056   99930 round_trippers.go:580]     Audit-Id: fef8d3a4-3105-46c6-90a9-d2a73f772d05
	I1212 22:35:42.905062   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:42.905067   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:42.905074   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:42.905081   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:42.905088   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:42 GMT
	I1212 22:35:42.905530   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:42.906003   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:42.906021   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:42.906028   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:42.906034   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:42.908613   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:42.908630   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:42.908637   99930 round_trippers.go:580]     Audit-Id: 21193726-e44a-4031-a252-1dfe8cb9020f
	I1212 22:35:42.908653   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:42.908664   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:42.908674   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:42.908691   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:42.908702   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:42 GMT
	I1212 22:35:42.908827   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:42.909195   99930 pod_ready.go:102] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"False"
	I1212 22:35:43.401448   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:43.401475   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:43.401484   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:43.401490   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:43.404497   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:43.404520   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:43.404528   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:43 GMT
	I1212 22:35:43.404534   99930 round_trippers.go:580]     Audit-Id: 359b42a4-ffe7-4482-882e-83ada6577a5f
	I1212 22:35:43.404545   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:43.404552   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:43.404560   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:43.404568   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:43.404803   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:43.405316   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:43.405334   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:43.405342   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:43.405349   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:43.407850   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:43.407868   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:43.407875   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:43.407880   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:43.407887   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:43 GMT
	I1212 22:35:43.407894   99930 round_trippers.go:580]     Audit-Id: f773366c-c28c-471a-b1b7-53bb6b698472
	I1212 22:35:43.407902   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:43.407911   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:43.408071   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:43.901678   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:43.901706   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:43.901715   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:43.901721   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:43.904535   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:43.904561   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:43.904572   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:43.904581   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:43.904589   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:43.904596   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:43 GMT
	I1212 22:35:43.904604   99930 round_trippers.go:580]     Audit-Id: 7995d385-9d59-41c2-8f5b-fe375f93bd64
	I1212 22:35:43.904613   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:43.905116   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:43.905688   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:43.905709   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:43.905726   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:43.905740   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:43.908644   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:43.908674   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:43.908681   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:43.908687   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:43 GMT
	I1212 22:35:43.908693   99930 round_trippers.go:580]     Audit-Id: 502d27d0-757b-400b-8ec9-d8be172da02a
	I1212 22:35:43.908698   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:43.908703   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:43.908708   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:43.909047   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:44.401528   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:44.401554   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:44.401563   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:44.401569   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:44.404649   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:44.404675   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:44.404686   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:44.404695   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:44.404703   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:44.404716   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:44 GMT
	I1212 22:35:44.404726   99930 round_trippers.go:580]     Audit-Id: 2d4d0d83-f7a4-4ff8-9d3b-b4bea63e8a33
	I1212 22:35:44.404735   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:44.404886   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:44.405298   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:44.405312   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:44.405320   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:44.405325   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:44.407283   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:44.407305   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:44.407315   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:44 GMT
	I1212 22:35:44.407323   99930 round_trippers.go:580]     Audit-Id: 339c38cf-7d2c-49b8-bc60-e19b24352bc1
	I1212 22:35:44.407338   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:44.407346   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:44.407354   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:44.407363   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:44.407701   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:44.901962   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:44.901992   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:44.902000   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:44.902006   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:44.905200   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:44.905223   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:44.905230   99930 round_trippers.go:580]     Audit-Id: 19fc2c57-5151-4d34-be35-d3e41373509b
	I1212 22:35:44.905238   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:44.905246   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:44.905254   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:44.905261   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:44.905269   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:44 GMT
	I1212 22:35:44.905425   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:44.905929   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:44.905947   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:44.905956   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:44.905961   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:44.908176   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:44.908196   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:44.908206   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:44.908213   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:44.908220   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:44.908228   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:44.908236   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:44 GMT
	I1212 22:35:44.908248   99930 round_trippers.go:580]     Audit-Id: 2351d5e1-63c5-4569-9aa9-7c5cb8a68aa8
	I1212 22:35:44.908495   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:45.402230   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:45.402257   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.402265   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.402272   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.405825   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:45.405845   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.405853   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.405858   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.405867   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.405873   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.405878   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.405883   99930 round_trippers.go:580]     Audit-Id: 75687e56-4708-47b2-beff-bb5f741e4e87
	I1212 22:35:45.406120   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"757","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1212 22:35:45.406571   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:45.406587   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.406595   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.406601   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.409734   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:45.409749   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.409758   99930 round_trippers.go:580]     Audit-Id: 70c5d326-9eda-40e2-b49d-8b42a26acfc8
	I1212 22:35:45.409766   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.409774   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.409782   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.409789   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.409798   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.409978   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:45.410281   99930 pod_ready.go:102] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"False"
	I1212 22:35:45.901638   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:35:45.901663   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.901673   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.901679   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.904644   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:45.904665   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.904675   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.904683   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.904691   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.904700   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.904709   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.904717   99930 round_trippers.go:580]     Audit-Id: 91ceae0f-1cc6-4e95-ada8-5c52feb9f260
	I1212 22:35:45.905065   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"891","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1212 22:35:45.905470   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:45.905485   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.905495   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.905503   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.907818   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:45.907840   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.907851   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.907860   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.907869   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.907884   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.907892   99930 round_trippers.go:580]     Audit-Id: 94b3b857-563f-410a-a86b-769b183cab33
	I1212 22:35:45.907901   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.908377   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:45.908794   99930 pod_ready.go:92] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:45.908814   99930 pod_ready.go:81] duration metric: took 5.197309311s waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
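[Editor's note] The block above is minikube's pod_ready helper polling the etcd static pod (and its node) roughly twice a second until the pod's Ready condition turns True. A minimal client-go sketch of that per-pod check, assuming a standard kubeconfig; the checkPodReady helper and the 500ms poll interval are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkPodReady fetches the named pod and reports whether its Ready condition
// is True, mirroring the check pod_ready.go logs above. Illustrative only.
func checkPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := checkPodReady(context.Background(), cs, "kube-system", "etcd-multinode-054207")
		if err == nil && ready {
			fmt.Println("etcd-multinode-054207 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the timestamps above show ~500ms between polls
	}
}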
	I1212 22:35:45.908845   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:45.908925   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:35:45.908938   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.908951   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.908963   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.911284   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:45.911307   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.911317   99930 round_trippers.go:580]     Audit-Id: 5fb90ee6-4077-4c3f-b17e-8d838877c878
	I1212 22:35:45.911328   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.911339   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.911349   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.911361   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.911372   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.911538   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"882","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1212 22:35:45.912081   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:45.912099   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.912106   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.912113   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.914072   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:45.914086   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.914093   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.914098   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.914103   99930 round_trippers.go:580]     Audit-Id: 4a0b403f-ca65-413e-bfe9-417f5959cf4f
	I1212 22:35:45.914108   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.914114   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.914119   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.914272   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:45.914690   99930 pod_ready.go:92] pod "kube-apiserver-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:45.914711   99930 pod_ready.go:81] duration metric: took 5.852251ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:45.914726   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:45.914807   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:35:45.914818   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.914829   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.914839   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.916566   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:45.916580   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.916586   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.916591   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.916596   99930 round_trippers.go:580]     Audit-Id: 21a9b778-7e7e-43b6-b038-b27c20d36cca
	I1212 22:35:45.916601   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.916608   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.916614   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.916788   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"769","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1212 22:35:45.917290   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:45.917310   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.917321   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.917332   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.919209   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:45.919230   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.919248   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.919255   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.919267   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.919278   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.919284   99930 round_trippers.go:580]     Audit-Id: dba4b451-098e-4c0e-b710-cf4c1ca50b59
	I1212 22:35:45.919292   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.919463   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:45.919831   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:35:45.919847   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:45.919854   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:45.919861   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:45.921694   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:45.921714   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:45.921723   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:45.921730   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:45.921735   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:45.921741   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:45.921746   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:45 GMT
	I1212 22:35:45.921751   99930 round_trippers.go:580]     Audit-Id: b82d9521-dcc4-40e5-8f24-26c38622777d
	I1212 22:35:45.921944   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"769","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1212 22:35:46.097761   99930 request.go:629] Waited for 175.378083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:46.097838   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:46.097845   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:46.097856   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:46.097866   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:46.100553   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:46.100584   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:46.100597   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:46.100609   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:46.100621   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:46.100632   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:46 GMT
	I1212 22:35:46.100638   99930 round_trippers.go:580]     Audit-Id: 8552988d-38ef-4555-9308-dd5a72b8ef18
	I1212 22:35:46.100643   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:46.100827   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:46.602294   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:35:46.602323   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:46.602332   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:46.602338   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:46.605277   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:46.605299   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:46.605307   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:46 GMT
	I1212 22:35:46.605313   99930 round_trippers.go:580]     Audit-Id: 806c5bdb-06a1-41c1-8abe-0eac44afffba
	I1212 22:35:46.605318   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:46.605328   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:46.605333   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:46.605338   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:46.605561   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"893","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1212 22:35:46.606003   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:46.606018   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:46.606025   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:46.606031   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:46.608601   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:46.608618   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:46.608625   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:46 GMT
	I1212 22:35:46.608630   99930 round_trippers.go:580]     Audit-Id: 20419c7a-ca03-4cbf-abfa-187d4c277c5e
	I1212 22:35:46.608638   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:46.608656   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:46.608665   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:46.608676   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:46.609390   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:46.609766   99930 pod_ready.go:92] pod "kube-controller-manager-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:46.609787   99930 pod_ready.go:81] duration metric: took 695.047766ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
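[Editor's note] Several polls in this wait were delayed by client-go's built-in rate limiter, visible above as "Waited for ... due to client-side throttling, not priority and fairness". With QPS and Burst left at zero, client-go documents defaults of roughly 5 requests/second with a burst of 10, which the back-to-back pod/node GETs exceed. A hedged sketch of where those knobs live on rest.Config; the raised values are purely illustrative, not minikube settings:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// The "client-side throttling" waits come from the token-bucket limiter
	// configured here; zero values fall back to client-go's defaults (QPS 5, Burst 10).
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}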
	I1212 22:35:46.609800   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:46.698107   99930 request.go:629] Waited for 88.236808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:35:46.698182   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:35:46.698186   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:46.698197   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:46.698207   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:46.701059   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:46.701089   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:46.701118   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:46 GMT
	I1212 22:35:46.701128   99930 round_trippers.go:580]     Audit-Id: 9a87fa87-2dcd-4fcf-be7b-75cf45e1f537
	I1212 22:35:46.701137   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:46.701147   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:46.701156   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:46.701169   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:46.701648   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"515","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:35:46.897545   99930 request.go:629] Waited for 195.349803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:35:46.897614   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:35:46.897619   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:46.897627   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:46.897636   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:46.900486   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:46.900508   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:46.900515   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:46 GMT
	I1212 22:35:46.900521   99930 round_trippers.go:580]     Audit-Id: 2315f55a-7a03-4407-a666-dae327db4270
	I1212 22:35:46.900527   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:46.900534   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:46.900542   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:46.900549   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:46.900654   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4","resourceVersion":"748","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_27_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4235 chars]
	I1212 22:35:46.901007   99930 pod_ready.go:92] pod "kube-proxy-jtfmt" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:46.901028   99930 pod_ready.go:81] duration metric: took 291.218869ms waiting for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:46.901040   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:47.097426   99930 request.go:629] Waited for 196.315493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:35:47.097514   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:35:47.097550   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:47.097560   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:47.097571   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:47.100604   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:47.100630   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:47.100641   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:47.100650   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:47.100658   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:47.100667   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:47.100675   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:47 GMT
	I1212 22:35:47.100684   99930 round_trippers.go:580]     Audit-Id: 609e1b38-c616-4485-99aa-17ffda9015df
	I1212 22:35:47.100811   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"833","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:35:47.297664   99930 request.go:629] Waited for 196.371053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:47.297735   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:47.297741   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:47.297749   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:47.297755   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:47.300689   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:47.300714   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:47.300721   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:47.300726   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:47.300732   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:47.300738   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:47 GMT
	I1212 22:35:47.300743   99930 round_trippers.go:580]     Audit-Id: 0fba9806-5fa5-4ad5-8d9f-89dc66c53848
	I1212 22:35:47.300749   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:47.301048   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:47.301355   99930 pod_ready.go:92] pod "kube-proxy-rnx8m" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:47.301368   99930 pod_ready.go:81] duration metric: took 400.321207ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:47.301380   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:47.497915   99930 request.go:629] Waited for 196.46585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:35:47.497975   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:35:47.497980   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:47.497988   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:47.498000   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:47.501212   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:47.501269   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:47.501312   99930 round_trippers.go:580]     Audit-Id: 4a0c7f9f-5d23-40d1-b4ca-2d73738d0435
	I1212 22:35:47.501328   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:47.501339   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:47.501351   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:47.501362   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:47.501369   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:47 GMT
	I1212 22:35:47.501510   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xfhnh","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ca01f00-0c60-4a26-8baf-0718911a7f01","resourceVersion":"723","creationTimestamp":"2023-12-12T22:26:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:35:47.697351   99930 request.go:629] Waited for 195.307196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:35:47.697430   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:35:47.697436   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:47.697444   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:47.697450   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:47.700379   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:47.700409   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:47.700419   99930 round_trippers.go:580]     Audit-Id: a5289d9b-08ef-44af-9c67-38c0a8266565
	I1212 22:35:47.700425   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:47.700430   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:47.700435   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:47.700440   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:47.700445   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:47 GMT
	I1212 22:35:47.700721   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m03","uid":"b0e92539-35e0-4df7-a26b-9c088375b04e","resourceVersion":"753","creationTimestamp":"2023-12-12T22:27:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_27_38_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I1212 22:35:47.700996   99930 pod_ready.go:92] pod "kube-proxy-xfhnh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:47.701010   99930 pod_ready.go:81] duration metric: took 399.623158ms waiting for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:47.701019   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:47.897425   99930 request.go:629] Waited for 196.341476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:35:47.897520   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:35:47.897526   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:47.897534   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:47.897540   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:47.900541   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:47.900560   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:47.900567   99930 round_trippers.go:580]     Audit-Id: f24b2476-94b4-4fa5-947b-4cbf94a3c98d
	I1212 22:35:47.900575   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:47.900580   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:47.900588   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:47.900595   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:47.900604   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:47 GMT
	I1212 22:35:47.900779   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"884","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1212 22:35:48.097623   99930 request.go:629] Waited for 196.384728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:48.097700   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:35:48.097705   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.097713   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.097724   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.100876   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:48.100898   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.100906   99930 round_trippers.go:580]     Audit-Id: d2b06d27-daea-4f5a-a9e0-7e9349b5478e
	I1212 22:35:48.100914   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.100923   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.100931   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.100941   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.100949   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.101161   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1212 22:35:48.101507   99930 pod_ready.go:92] pod "kube-scheduler-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:35:48.101523   99930 pod_ready.go:81] duration metric: took 400.497924ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:35:48.101533   99930 pod_ready.go:38] duration metric: took 11.095285917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:35:48.101547   99930 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:35:48.101605   99930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:35:48.115215   99930 command_runner.go:130] > 1067
	I1212 22:35:48.115286   99930 api_server.go:72] duration metric: took 11.992209789s to wait for apiserver process to appear ...
	I1212 22:35:48.115297   99930 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:35:48.115313   99930 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:35:48.120735   99930 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1212 22:35:48.120810   99930 round_trippers.go:463] GET https://192.168.39.172:8443/version
	I1212 22:35:48.120822   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.120830   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.120836   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.121988   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:35:48.122011   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.122022   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.122030   99930 round_trippers.go:580]     Content-Length: 264
	I1212 22:35:48.122039   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.122048   99930 round_trippers.go:580]     Audit-Id: b60b7f5b-caea-4f0f-a1c6-44f773a529d0
	I1212 22:35:48.122058   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.122070   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.122079   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.122129   99930 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 22:35:48.122182   99930 api_server.go:141] control plane version: v1.28.4
	I1212 22:35:48.122194   99930 api_server.go:131] duration metric: took 6.891487ms to wait for apiserver health ...
	I1212 22:35:48.122202   99930 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:35:48.297684   99930 request.go:629] Waited for 175.368959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:48.297746   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:48.297751   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.297759   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.297765   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.305025   99930 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1212 22:35:48.305048   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.305055   99930 round_trippers.go:580]     Audit-Id: 98d7bf8e-1a2f-40b4-8624-bd3581e35d25
	I1212 22:35:48.305061   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.305066   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.305071   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.305076   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.305081   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.307648   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81870 chars]
	I1212 22:35:48.310914   99930 system_pods.go:59] 12 kube-system pods found
	I1212 22:35:48.310952   99930 system_pods.go:61] "coredns-5dd5756b68-rj4p4" [8bd5cacb-68c8-41e5-a91e-07e6a9739897] Running
	I1212 22:35:48.310964   99930 system_pods.go:61] "etcd-multinode-054207" [2c328cec-c2e2-49d1-85af-66899f444c90] Running
	I1212 22:35:48.310974   99930 system_pods.go:61] "kindnet-gh2q6" [e9242a8e-6502-4550-a96a-d270e77dd6cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:48.310984   99930 system_pods.go:61] "kindnet-mth9w" [4fa2205d-2108-425a-a3c2-d8d219cad2e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:48.310992   99930 system_pods.go:61] "kindnet-nj2sh" [947b4acb-082a-436b-b68f-d253f391ee24] Running
	I1212 22:35:48.311001   99930 system_pods.go:61] "kube-apiserver-multinode-054207" [70bc63a6-e544-401c-90ae-7473ce8343da] Running
	I1212 22:35:48.311012   99930 system_pods.go:61] "kube-controller-manager-multinode-054207" [9040c58b-7f77-4355-880f-991c010720f7] Running
	I1212 22:35:48.311023   99930 system_pods.go:61] "kube-proxy-jtfmt" [d38d8816-bb76-4b9d-aa24-33744ec196fa] Running
	I1212 22:35:48.311033   99930 system_pods.go:61] "kube-proxy-rnx8m" [e8875d71-d50e-44f1-92c1-db1858b4b3bb] Running
	I1212 22:35:48.311043   99930 system_pods.go:61] "kube-proxy-xfhnh" [2ca01f00-0c60-4a26-8baf-0718911a7f01] Running
	I1212 22:35:48.311054   99930 system_pods.go:61] "kube-scheduler-multinode-054207" [79f6cbd9-988a-4dc2-a910-15abd7598b9c] Running
	I1212 22:35:48.311065   99930 system_pods.go:61] "storage-provisioner" [40d577b4-8d36-4f55-946d-92755b1d6998] Running
	I1212 22:35:48.311078   99930 system_pods.go:74] duration metric: took 188.865408ms to wait for pod list to return data ...
	I1212 22:35:48.311088   99930 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:35:48.497556   99930 request.go:629] Waited for 186.376134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:35:48.497620   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:35:48.497625   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.497633   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.497639   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.500551   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:35:48.500573   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.500580   99930 round_trippers.go:580]     Audit-Id: 43012e07-0341-4338-8c55-d7acc47b2ed2
	I1212 22:35:48.500586   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.500591   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.500596   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.500601   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.500606   99930 round_trippers.go:580]     Content-Length: 261
	I1212 22:35:48.500611   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.500635   99930 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"992432e5-3d6d-43a7-bea9-b64208472919","resourceVersion":"339","creationTimestamp":"2023-12-12T22:25:22Z"}}]}
	I1212 22:35:48.500830   99930 default_sa.go:45] found service account: "default"
	I1212 22:35:48.500847   99930 default_sa.go:55] duration metric: took 189.747945ms for default service account to be created ...
	I1212 22:35:48.500855   99930 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:35:48.697225   99930 request.go:629] Waited for 196.30517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:48.697310   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:35:48.697375   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.697414   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.697423   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.701638   99930 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:35:48.701659   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.701666   99930 round_trippers.go:580]     Audit-Id: c74b37bd-fda7-4db5-a686-4d311a6f5e15
	I1212 22:35:48.701672   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.701678   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.701683   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.701688   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.701693   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.702391   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81870 chars]
	I1212 22:35:48.704809   99930 system_pods.go:86] 12 kube-system pods found
	I1212 22:35:48.704833   99930 system_pods.go:89] "coredns-5dd5756b68-rj4p4" [8bd5cacb-68c8-41e5-a91e-07e6a9739897] Running
	I1212 22:35:48.704838   99930 system_pods.go:89] "etcd-multinode-054207" [2c328cec-c2e2-49d1-85af-66899f444c90] Running
	I1212 22:35:48.704846   99930 system_pods.go:89] "kindnet-gh2q6" [e9242a8e-6502-4550-a96a-d270e77dd6cf] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:48.704856   99930 system_pods.go:89] "kindnet-mth9w" [4fa2205d-2108-425a-a3c2-d8d219cad2e7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 22:35:48.704864   99930 system_pods.go:89] "kindnet-nj2sh" [947b4acb-082a-436b-b68f-d253f391ee24] Running
	I1212 22:35:48.704871   99930 system_pods.go:89] "kube-apiserver-multinode-054207" [70bc63a6-e544-401c-90ae-7473ce8343da] Running
	I1212 22:35:48.704886   99930 system_pods.go:89] "kube-controller-manager-multinode-054207" [9040c58b-7f77-4355-880f-991c010720f7] Running
	I1212 22:35:48.704892   99930 system_pods.go:89] "kube-proxy-jtfmt" [d38d8816-bb76-4b9d-aa24-33744ec196fa] Running
	I1212 22:35:48.704896   99930 system_pods.go:89] "kube-proxy-rnx8m" [e8875d71-d50e-44f1-92c1-db1858b4b3bb] Running
	I1212 22:35:48.704901   99930 system_pods.go:89] "kube-proxy-xfhnh" [2ca01f00-0c60-4a26-8baf-0718911a7f01] Running
	I1212 22:35:48.704905   99930 system_pods.go:89] "kube-scheduler-multinode-054207" [79f6cbd9-988a-4dc2-a910-15abd7598b9c] Running
	I1212 22:35:48.704908   99930 system_pods.go:89] "storage-provisioner" [40d577b4-8d36-4f55-946d-92755b1d6998] Running
	I1212 22:35:48.704914   99930 system_pods.go:126] duration metric: took 204.055358ms to wait for k8s-apps to be running ...
	I1212 22:35:48.704925   99930 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:35:48.704975   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:35:48.719492   99930 system_svc.go:56] duration metric: took 14.558372ms WaitForService to wait for kubelet.
	I1212 22:35:48.719516   99930 kubeadm.go:581] duration metric: took 12.596440476s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:35:48.719534   99930 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:35:48.897950   99930 request.go:629] Waited for 178.322632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I1212 22:35:48.898008   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:35:48.898015   99930 round_trippers.go:469] Request Headers:
	I1212 22:35:48.898027   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:35:48.898037   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:35:48.901230   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:35:48.901257   99930 round_trippers.go:577] Response Headers:
	I1212 22:35:48.901268   99930 round_trippers.go:580]     Audit-Id: 4ff8baf1-f39d-4bdb-87ca-a93cc3c04417
	I1212 22:35:48.901277   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:35:48.901285   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:35:48.901293   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:35:48.901301   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:35:48.901313   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:35:48 GMT
	I1212 22:35:48.901560   99930 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"846","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I1212 22:35:48.902171   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:48.902193   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:48.902204   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:48.902208   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:48.902212   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:35:48.902216   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:35:48.902221   99930 node_conditions.go:105] duration metric: took 182.682281ms to run NodePressure ...
	I1212 22:35:48.902234   99930 start.go:228] waiting for startup goroutines ...
	I1212 22:35:48.902240   99930 start.go:233] waiting for cluster config update ...
	I1212 22:35:48.902247   99930 start.go:242] writing updated cluster config ...
	I1212 22:35:48.902661   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:48.902741   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:35:48.906042   99930 out.go:177] * Starting worker node multinode-054207-m02 in cluster multinode-054207
	I1212 22:35:48.907465   99930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:35:48.907486   99930 cache.go:56] Caching tarball of preloaded images
	I1212 22:35:48.907573   99930 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:35:48.907588   99930 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:35:48.907681   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:35:48.907921   99930 start.go:365] acquiring machines lock for multinode-054207-m02: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:35:48.907982   99930 start.go:369] acquired machines lock for "multinode-054207-m02" in 38.906µs
	I1212 22:35:48.908002   99930 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:35:48.908009   99930 fix.go:54] fixHost starting: m02
	I1212 22:35:48.908289   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:35:48.908311   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:35:48.922905   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1212 22:35:48.923405   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:35:48.923905   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:35:48.923936   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:35:48.924268   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:35:48.924442   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:35:48.924594   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetState
	I1212 22:35:48.926592   99930 fix.go:102] recreateIfNeeded on multinode-054207-m02: state=Running err=<nil>
	W1212 22:35:48.926614   99930 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:35:48.928677   99930 out.go:177] * Updating the running kvm2 "multinode-054207-m02" VM ...
	I1212 22:35:48.930911   99930 machine.go:88] provisioning docker machine ...
	I1212 22:35:48.930943   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:35:48.931198   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:35:48.931395   99930 buildroot.go:166] provisioning hostname "multinode-054207-m02"
	I1212 22:35:48.931420   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:35:48.931571   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:35:48.934171   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:48.934631   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:48.934665   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:48.934910   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:35:48.935072   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:48.935231   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:48.935357   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:35:48.935525   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:48.935840   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:35:48.935853   99930 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207-m02 && echo "multinode-054207-m02" | sudo tee /etc/hostname
	I1212 22:35:49.080847   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-054207-m02
	
	I1212 22:35:49.080878   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:35:49.083769   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.084112   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:49.084153   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.084351   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:35:49.084581   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:49.084749   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:49.084943   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:35:49.085119   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:49.085529   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:35:49.085549   99930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-054207-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-054207-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-054207-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:35:49.216417   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:35:49.216451   99930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:35:49.216466   99930 buildroot.go:174] setting up certificates
	I1212 22:35:49.216479   99930 provision.go:83] configureAuth start
	I1212 22:35:49.216487   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetMachineName
	I1212 22:35:49.216779   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:35:49.219410   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.219759   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:49.219783   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.219908   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:35:49.221962   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.222305   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:49.222336   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.222458   99930 provision.go:138] copyHostCerts
	I1212 22:35:49.222489   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:35:49.222530   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:35:49.222541   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:35:49.222623   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:35:49.222712   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:35:49.222735   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:35:49.222745   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:35:49.222781   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:35:49.222842   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:35:49.222869   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:35:49.222878   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:35:49.222909   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:35:49.222976   99930 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.multinode-054207-m02 san=[192.168.39.15 192.168.39.15 localhost 127.0.0.1 minikube multinode-054207-m02]
	I1212 22:35:49.346287   99930 provision.go:172] copyRemoteCerts
	I1212 22:35:49.346350   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:35:49.346378   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:35:49.349188   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.349624   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:49.349661   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.349891   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:35:49.350107   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:49.350280   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:35:49.350431   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:35:49.444969   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:35:49.445060   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:35:49.470064   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:35:49.470141   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 22:35:49.493589   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:35:49.493678   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:35:49.516881   99930 provision.go:86] duration metric: configureAuth took 300.387323ms
	I1212 22:35:49.516916   99930 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:35:49.517141   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:49.517259   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:35:49.520003   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.520398   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:35:49.520445   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:35:49.520605   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:35:49.520800   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:49.520966   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:35:49.521092   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:35:49.521278   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:35:49.521750   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:35:49.521777   99930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:37:20.151189   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:37:20.151230   99930 machine.go:91] provisioned docker machine in 1m31.22029303s
	I1212 22:37:20.151277   99930 start.go:300] post-start starting for "multinode-054207-m02" (driver="kvm2")
	I1212 22:37:20.151294   99930 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:37:20.151324   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:37:20.151723   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:37:20.151757   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:37:20.154965   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.155414   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:20.155444   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.155597   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:37:20.155872   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:37:20.156035   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:37:20.156189   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:37:20.250207   99930 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:37:20.254587   99930 command_runner.go:130] > NAME=Buildroot
	I1212 22:37:20.254620   99930 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 22:37:20.254628   99930 command_runner.go:130] > ID=buildroot
	I1212 22:37:20.254638   99930 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 22:37:20.254646   99930 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 22:37:20.254800   99930 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:37:20.254862   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:37:20.254952   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:37:20.255061   99930 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:37:20.255076   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:37:20.255160   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:37:20.263999   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:37:20.289199   99930 start.go:303] post-start completed in 137.902548ms
	I1212 22:37:20.289230   99930 fix.go:56] fixHost completed within 1m31.381220093s
	I1212 22:37:20.289260   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:37:20.291823   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.292318   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:20.292356   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.292493   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:37:20.292718   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:37:20.292905   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:37:20.293059   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:37:20.293220   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:37:20.293544   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1212 22:37:20.293556   99930 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:37:20.432060   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702420640.423818997
	
	I1212 22:37:20.432092   99930 fix.go:206] guest clock: 1702420640.423818997
	I1212 22:37:20.432118   99930 fix.go:219] Guest: 2023-12-12 22:37:20.423818997 +0000 UTC Remote: 2023-12-12 22:37:20.2892365 +0000 UTC m=+451.049472833 (delta=134.582497ms)
	I1212 22:37:20.432146   99930 fix.go:190] guest clock delta is within tolerance: 134.582497ms
	I1212 22:37:20.432158   99930 start.go:83] releasing machines lock for "multinode-054207-m02", held for 1m31.524162887s
	I1212 22:37:20.432193   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:37:20.432508   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:37:20.435323   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.435812   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:20.435845   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.438473   99930 out.go:177] * Found network options:
	I1212 22:37:20.440203   99930 out.go:177]   - NO_PROXY=192.168.39.172
	W1212 22:37:20.441289   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:37:20.441349   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:37:20.442065   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:37:20.442285   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:37:20.442388   99930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1212 22:37:20.442452   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:37:20.442461   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:37:20.442529   99930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:37:20.442553   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:37:20.445572   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.445854   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.446015   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:20.446039   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.446199   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:37:20.446363   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:37:20.446372   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:20.446407   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:20.446577   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:37:20.446657   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:37:20.446827   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:37:20.446817   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:37:20.447016   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:37:20.447160   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:37:20.711950   99930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:37:20.711986   99930 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:37:20.718465   99930 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 22:37:20.718516   99930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:37:20.718576   99930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:37:20.727140   99930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 22:37:20.727169   99930 start.go:475] detecting cgroup driver to use...
	I1212 22:37:20.727228   99930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:37:20.742768   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:37:20.755195   99930 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:37:20.755263   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:37:20.768118   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:37:20.780952   99930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:37:20.906676   99930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:37:21.030874   99930 docker.go:219] disabling docker service ...
	I1212 22:37:21.030950   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:37:21.048308   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:37:21.061323   99930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:37:21.190180   99930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:37:21.345386   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:37:21.360145   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:37:21.378928   99930 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:37:21.378980   99930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:37:21.379038   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:37:21.389621   99930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:37:21.389703   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:37:21.399759   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:37:21.410309   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:37:21.425506   99930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:37:21.437276   99930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:37:21.446513   99930 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 22:37:21.446693   99930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:37:21.455982   99930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:37:21.608380   99930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:37:23.652745   99930 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.044324532s)
	I1212 22:37:23.652786   99930 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:37:23.652845   99930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:37:23.658417   99930 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:37:23.658449   99930 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:37:23.658464   99930 command_runner.go:130] > Device: 16h/22d	Inode: 1244        Links: 1
	I1212 22:37:23.658474   99930 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:37:23.658483   99930 command_runner.go:130] > Access: 2023-12-12 22:37:23.548767762 +0000
	I1212 22:37:23.658492   99930 command_runner.go:130] > Modify: 2023-12-12 22:37:23.548767762 +0000
	I1212 22:37:23.658500   99930 command_runner.go:130] > Change: 2023-12-12 22:37:23.548767762 +0000
	I1212 22:37:23.658516   99930 command_runner.go:130] >  Birth: -
	I1212 22:37:23.658640   99930 start.go:543] Will wait 60s for crictl version
	I1212 22:37:23.658710   99930 ssh_runner.go:195] Run: which crictl
	I1212 22:37:23.662545   99930 command_runner.go:130] > /usr/bin/crictl
	I1212 22:37:23.662805   99930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:37:23.704635   99930 command_runner.go:130] > Version:  0.1.0
	I1212 22:37:23.704665   99930 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:37:23.704670   99930 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 22:37:23.704676   99930 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:37:23.705951   99930 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:37:23.706021   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:37:23.757556   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:37:23.757582   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:37:23.757597   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:37:23.757602   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:37:23.757608   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:37:23.757614   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:37:23.757618   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:37:23.757623   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:37:23.757628   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:37:23.757635   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:37:23.757639   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:37:23.757644   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:37:23.759195   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:37:23.810063   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:37:23.810087   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:37:23.810094   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:37:23.810098   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:37:23.810109   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:37:23.810114   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:37:23.810118   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:37:23.810123   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:37:23.810128   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:37:23.810160   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:37:23.810165   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:37:23.810170   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:37:23.814616   99930 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:37:23.816157   99930 out.go:177]   - env NO_PROXY=192.168.39.172
	I1212 22:37:23.817701   99930 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:37:23.820475   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:23.820852   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:37:23.820879   99930 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:37:23.821048   99930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:37:23.825539   99930 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 22:37:23.825631   99930 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207 for IP: 192.168.39.15
	I1212 22:37:23.825662   99930 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:37:23.825846   99930 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:37:23.825912   99930 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:37:23.825935   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:37:23.825954   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:37:23.825974   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:37:23.825992   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:37:23.826052   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:37:23.826082   99930 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:37:23.826092   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:37:23.826114   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:37:23.826136   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:37:23.826167   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:37:23.826212   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:37:23.826240   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:37:23.826252   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:37:23.826270   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:37:23.826683   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:37:23.854594   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:37:23.880748   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:37:23.905617   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:37:23.932804   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:37:23.959330   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:37:23.984463   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:37:24.008695   99930 ssh_runner.go:195] Run: openssl version
	I1212 22:37:24.014529   99930 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 22:37:24.014812   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:37:24.025362   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:37:24.030039   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:37:24.030621   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:37:24.030672   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:37:24.036654   99930 command_runner.go:130] > 3ec20f2e
	I1212 22:37:24.036995   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 22:37:24.046029   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:37:24.056264   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:37:24.061024   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:37:24.061155   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:37:24.061221   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:37:24.066665   99930 command_runner.go:130] > b5213941
	I1212 22:37:24.066744   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:37:24.075076   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:37:24.084975   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:37:24.089732   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:37:24.089766   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:37:24.089809   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:37:24.095099   99930 command_runner.go:130] > 51391683
	I1212 22:37:24.095381   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
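For readers following the certificate steps above: OpenSSL looks up CA certificates in /etc/ssl/certs by the certificate's subject-name hash, which is why the log first runs openssl x509 -hash -noout and then links each file as <hash>.0. A minimal Go sketch of that same scheme, assuming the openssl binary is on PATH and using illustrative paths rather than minikube's internal helpers, could be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the pattern seen in the log above: compute the
// OpenSSL subject-name hash of a PEM certificate and create the
// <certsDir>/<hash>.0 symlink that OpenSSL uses for CA lookup.
// The paths are illustrative, not the ones minikube uses internally.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of "ln -fs": drop any stale link, then create the symlink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/838252.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Run against the 838252.pem file above, this would recreate the /etc/ssl/certs/3ec20f2e.0 link the log reports.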
	I1212 22:37:24.103946   99930 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:37:24.108198   99930 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:37:24.108314   99930 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:37:24.108428   99930 ssh_runner.go:195] Run: crio config
	I1212 22:37:24.160237   99930 command_runner.go:130] ! time="2023-12-12 22:37:24.152595047Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 22:37:24.160354   99930 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:37:24.172199   99930 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:37:24.172226   99930 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:37:24.172240   99930 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:37:24.172246   99930 command_runner.go:130] > #
	I1212 22:37:24.172262   99930 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:37:24.172274   99930 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:37:24.172287   99930 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:37:24.172300   99930 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:37:24.172309   99930 command_runner.go:130] > # reload'.
	I1212 22:37:24.172320   99930 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:37:24.172332   99930 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:37:24.172345   99930 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:37:24.172358   99930 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:37:24.172367   99930 command_runner.go:130] > [crio]
	I1212 22:37:24.172376   99930 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:37:24.172393   99930 command_runner.go:130] > # containers images, in this directory.
	I1212 22:37:24.172404   99930 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 22:37:24.172422   99930 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:37:24.172433   99930 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 22:37:24.172447   99930 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:37:24.172460   99930 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:37:24.172470   99930 command_runner.go:130] > storage_driver = "overlay"
	I1212 22:37:24.172479   99930 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:37:24.172489   99930 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:37:24.172495   99930 command_runner.go:130] > storage_option = [
	I1212 22:37:24.172500   99930 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 22:37:24.172506   99930 command_runner.go:130] > ]
	I1212 22:37:24.172512   99930 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:37:24.172521   99930 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:37:24.172528   99930 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:37:24.172533   99930 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:37:24.172541   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:37:24.172546   99930 command_runner.go:130] > # always happen on a node reboot
	I1212 22:37:24.172557   99930 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:37:24.172565   99930 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:37:24.172571   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:37:24.172583   99930 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:37:24.172591   99930 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:37:24.172601   99930 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:37:24.172611   99930 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:37:24.172618   99930 command_runner.go:130] > # internal_wipe = true
	I1212 22:37:24.172623   99930 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:37:24.172631   99930 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:37:24.172639   99930 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:37:24.172645   99930 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:37:24.172653   99930 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:37:24.172657   99930 command_runner.go:130] > [crio.api]
	I1212 22:37:24.172669   99930 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:37:24.172679   99930 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:37:24.172693   99930 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:37:24.172703   99930 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:37:24.172717   99930 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:37:24.172729   99930 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:37:24.172738   99930 command_runner.go:130] > # stream_port = "0"
	I1212 22:37:24.172747   99930 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:37:24.172757   99930 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:37:24.172767   99930 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:37:24.172777   99930 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:37:24.172787   99930 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:37:24.172798   99930 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:37:24.172805   99930 command_runner.go:130] > # minutes.
	I1212 22:37:24.172809   99930 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:37:24.172818   99930 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:37:24.172826   99930 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:37:24.172831   99930 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:37:24.172838   99930 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:37:24.172847   99930 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:37:24.172853   99930 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:37:24.172859   99930 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:37:24.172869   99930 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:37:24.172876   99930 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 22:37:24.172883   99930 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:37:24.172890   99930 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 22:37:24.172911   99930 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:37:24.172920   99930 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:37:24.172924   99930 command_runner.go:130] > [crio.runtime]
	I1212 22:37:24.172929   99930 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:37:24.172935   99930 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:37:24.172941   99930 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:37:24.172947   99930 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:37:24.172954   99930 command_runner.go:130] > # default_ulimits = [
	I1212 22:37:24.172958   99930 command_runner.go:130] > # ]
	I1212 22:37:24.172965   99930 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:37:24.172972   99930 command_runner.go:130] > # no_pivot = false
	I1212 22:37:24.172977   99930 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:37:24.172985   99930 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:37:24.172991   99930 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:37:24.173000   99930 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:37:24.173008   99930 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:37:24.173017   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:37:24.173024   99930 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 22:37:24.173028   99930 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:37:24.173035   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:37:24.173042   99930 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:37:24.173048   99930 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:37:24.173056   99930 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:37:24.173062   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:37:24.173068   99930 command_runner.go:130] > conmon_env = [
	I1212 22:37:24.173075   99930 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 22:37:24.173082   99930 command_runner.go:130] > ]
	I1212 22:37:24.173089   99930 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:37:24.173096   99930 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:37:24.173101   99930 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:37:24.173107   99930 command_runner.go:130] > # default_env = [
	I1212 22:37:24.173111   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173119   99930 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:37:24.173123   99930 command_runner.go:130] > # selinux = false
	I1212 22:37:24.173132   99930 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:37:24.173140   99930 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:37:24.173148   99930 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:37:24.173152   99930 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:37:24.173160   99930 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:37:24.173168   99930 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:37:24.173175   99930 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:37:24.173183   99930 command_runner.go:130] > # which might increase security.
	I1212 22:37:24.173188   99930 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 22:37:24.173196   99930 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:37:24.173204   99930 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:37:24.173211   99930 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:37:24.173220   99930 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:37:24.173227   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:37:24.173231   99930 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:37:24.173237   99930 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:37:24.173245   99930 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:37:24.173252   99930 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:37:24.173263   99930 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:37:24.173269   99930 command_runner.go:130] > # irqbalance daemon.
	I1212 22:37:24.173274   99930 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:37:24.173283   99930 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:37:24.173290   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:37:24.173294   99930 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:37:24.173307   99930 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:37:24.173313   99930 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:37:24.173319   99930 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:37:24.173326   99930 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:37:24.173332   99930 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:37:24.173340   99930 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:37:24.173347   99930 command_runner.go:130] > # will be added.
	I1212 22:37:24.173351   99930 command_runner.go:130] > # default_capabilities = [
	I1212 22:37:24.173355   99930 command_runner.go:130] > # 	"CHOWN",
	I1212 22:37:24.173362   99930 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:37:24.173368   99930 command_runner.go:130] > # 	"FSETID",
	I1212 22:37:24.173375   99930 command_runner.go:130] > # 	"FOWNER",
	I1212 22:37:24.173378   99930 command_runner.go:130] > # 	"SETGID",
	I1212 22:37:24.173382   99930 command_runner.go:130] > # 	"SETUID",
	I1212 22:37:24.173387   99930 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:37:24.173394   99930 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:37:24.173399   99930 command_runner.go:130] > # 	"KILL",
	I1212 22:37:24.173405   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173411   99930 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:37:24.173419   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:37:24.173426   99930 command_runner.go:130] > # default_sysctls = [
	I1212 22:37:24.173430   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173436   99930 command_runner.go:130] > # List of devices on the host that a
	I1212 22:37:24.173442   99930 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:37:24.173448   99930 command_runner.go:130] > # allowed_devices = [
	I1212 22:37:24.173452   99930 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:37:24.173458   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173463   99930 command_runner.go:130] > # List of additional devices. specified as
	I1212 22:37:24.173473   99930 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:37:24.173481   99930 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:37:24.173510   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:37:24.173521   99930 command_runner.go:130] > # additional_devices = [
	I1212 22:37:24.173524   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173529   99930 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:37:24.173533   99930 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:37:24.173537   99930 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:37:24.173544   99930 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:37:24.173547   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173556   99930 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:37:24.173564   99930 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:37:24.173571   99930 command_runner.go:130] > # Defaults to false.
	I1212 22:37:24.173576   99930 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:37:24.173585   99930 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:37:24.173590   99930 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:37:24.173597   99930 command_runner.go:130] > # hooks_dir = [
	I1212 22:37:24.173601   99930 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:37:24.173607   99930 command_runner.go:130] > # ]
	I1212 22:37:24.173614   99930 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:37:24.173620   99930 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:37:24.173627   99930 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:37:24.173630   99930 command_runner.go:130] > #
	I1212 22:37:24.173636   99930 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:37:24.173645   99930 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:37:24.173653   99930 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:37:24.173659   99930 command_runner.go:130] > #
	I1212 22:37:24.173668   99930 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:37:24.173681   99930 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:37:24.173694   99930 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:37:24.173705   99930 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:37:24.173711   99930 command_runner.go:130] > #
	I1212 22:37:24.173718   99930 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:37:24.173730   99930 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:37:24.173743   99930 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:37:24.173752   99930 command_runner.go:130] > pids_limit = 1024
	I1212 22:37:24.173763   99930 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 22:37:24.173774   99930 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:37:24.173780   99930 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:37:24.173790   99930 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:37:24.173797   99930 command_runner.go:130] > # log_size_max = -1
	I1212 22:37:24.173803   99930 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1212 22:37:24.173811   99930 command_runner.go:130] > # log_to_journald = false
	I1212 22:37:24.173817   99930 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:37:24.173824   99930 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:37:24.173829   99930 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:37:24.173836   99930 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:37:24.173842   99930 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:37:24.173849   99930 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:37:24.173855   99930 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:37:24.173861   99930 command_runner.go:130] > # read_only = false
	I1212 22:37:24.173867   99930 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:37:24.173875   99930 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:37:24.173882   99930 command_runner.go:130] > # live configuration reload.
	I1212 22:37:24.173886   99930 command_runner.go:130] > # log_level = "info"
	I1212 22:37:24.173892   99930 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:37:24.173899   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:37:24.173906   99930 command_runner.go:130] > # log_filter = ""
	I1212 22:37:24.173912   99930 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:37:24.173920   99930 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:37:24.173926   99930 command_runner.go:130] > # separated by comma.
	I1212 22:37:24.173930   99930 command_runner.go:130] > # uid_mappings = ""
	I1212 22:37:24.173938   99930 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:37:24.173945   99930 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:37:24.173951   99930 command_runner.go:130] > # separated by comma.
	I1212 22:37:24.173955   99930 command_runner.go:130] > # gid_mappings = ""
	I1212 22:37:24.173963   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:37:24.173972   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:37:24.173978   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:37:24.173984   99930 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:37:24.173991   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:37:24.173999   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:37:24.174009   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:37:24.174015   99930 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:37:24.174021   99930 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:37:24.174029   99930 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:37:24.174037   99930 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:37:24.174041   99930 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:37:24.174048   99930 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:37:24.174056   99930 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:37:24.174061   99930 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:37:24.174069   99930 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:37:24.174075   99930 command_runner.go:130] > drop_infra_ctr = false
	I1212 22:37:24.174084   99930 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:37:24.174091   99930 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:37:24.174101   99930 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:37:24.174107   99930 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:37:24.174115   99930 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:37:24.174122   99930 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:37:24.174129   99930 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:37:24.174136   99930 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:37:24.174143   99930 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 22:37:24.174149   99930 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:37:24.174157   99930 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:37:24.174166   99930 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:37:24.174172   99930 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:37:24.174177   99930 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:37:24.174187   99930 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 22:37:24.174198   99930 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1212 22:37:24.174205   99930 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:37:24.174213   99930 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:37:24.174220   99930 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:37:24.174225   99930 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:37:24.174233   99930 command_runner.go:130] > # ]
	I1212 22:37:24.174239   99930 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:37:24.174248   99930 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:37:24.174261   99930 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:37:24.174269   99930 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:37:24.174275   99930 command_runner.go:130] > #
	I1212 22:37:24.174280   99930 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:37:24.174287   99930 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:37:24.174291   99930 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:37:24.174298   99930 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:37:24.174303   99930 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:37:24.174310   99930 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:37:24.174314   99930 command_runner.go:130] > # Where:
	I1212 22:37:24.174321   99930 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:37:24.174328   99930 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:37:24.174336   99930 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:37:24.174345   99930 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:37:24.174351   99930 command_runner.go:130] > #   in $PATH.
	I1212 22:37:24.174358   99930 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:37:24.174365   99930 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:37:24.174371   99930 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:37:24.174377   99930 command_runner.go:130] > #   state.
	I1212 22:37:24.174383   99930 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:37:24.174394   99930 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 22:37:24.174403   99930 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:37:24.174411   99930 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:37:24.174419   99930 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:37:24.174428   99930 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:37:24.174433   99930 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:37:24.174440   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:37:24.174449   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:37:24.174458   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:37:24.174466   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:37:24.174474   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:37:24.174483   99930 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:37:24.174491   99930 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:37:24.174500   99930 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:37:24.174506   99930 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:37:24.174511   99930 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:37:24.174518   99930 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 22:37:24.174522   99930 command_runner.go:130] > runtime_type = "oci"
	I1212 22:37:24.174530   99930 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:37:24.174534   99930 command_runner.go:130] > runtime_config_path = ""
	I1212 22:37:24.174541   99930 command_runner.go:130] > monitor_path = ""
	I1212 22:37:24.174545   99930 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:37:24.174551   99930 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:37:24.174558   99930 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:37:24.174564   99930 command_runner.go:130] > # running containers
	I1212 22:37:24.174568   99930 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:37:24.174577   99930 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:37:24.174606   99930 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:37:24.174614   99930 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:37:24.174619   99930 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:37:24.174626   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:37:24.174631   99930 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:37:24.174637   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:37:24.174642   99930 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:37:24.174649   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:37:24.174656   99930 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:37:24.174665   99930 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:37:24.174678   99930 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:37:24.174694   99930 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 22:37:24.174709   99930 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:37:24.174721   99930 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:37:24.174735   99930 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:37:24.174751   99930 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:37:24.174763   99930 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:37:24.174776   99930 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:37:24.174785   99930 command_runner.go:130] > # Example:
	I1212 22:37:24.174793   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:37:24.174804   99930 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:37:24.174812   99930 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:37:24.174820   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:37:24.174827   99930 command_runner.go:130] > # cpuset = 0
	I1212 22:37:24.174831   99930 command_runner.go:130] > # cpushares = "0-1"
	I1212 22:37:24.174837   99930 command_runner.go:130] > # Where:
	I1212 22:37:24.174842   99930 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:37:24.174853   99930 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:37:24.174861   99930 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:37:24.174869   99930 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:37:24.174879   99930 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:37:24.174885   99930 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:37:24.174890   99930 command_runner.go:130] > # 
	I1212 22:37:24.174897   99930 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:37:24.174903   99930 command_runner.go:130] > #
	I1212 22:37:24.174909   99930 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:37:24.174917   99930 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:37:24.174926   99930 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:37:24.174935   99930 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:37:24.174943   99930 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:37:24.174949   99930 command_runner.go:130] > [crio.image]
	I1212 22:37:24.174956   99930 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:37:24.174963   99930 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:37:24.174969   99930 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:37:24.174977   99930 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:37:24.174986   99930 command_runner.go:130] > # global_auth_file = ""
	I1212 22:37:24.174993   99930 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:37:24.174999   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:37:24.175006   99930 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:37:24.175012   99930 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:37:24.175020   99930 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:37:24.175026   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:37:24.175032   99930 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:37:24.175038   99930 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:37:24.175047   99930 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 22:37:24.175053   99930 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 22:37:24.175062   99930 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:37:24.175068   99930 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:37:24.175075   99930 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:37:24.175083   99930 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:37:24.175091   99930 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:37:24.175100   99930 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:37:24.175108   99930 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:37:24.175114   99930 command_runner.go:130] > # signature_policy = ""
	I1212 22:37:24.175123   99930 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:37:24.175131   99930 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:37:24.175135   99930 command_runner.go:130] > # changing them here.
	I1212 22:37:24.175141   99930 command_runner.go:130] > # insecure_registries = [
	I1212 22:37:24.175145   99930 command_runner.go:130] > # ]
	I1212 22:37:24.175156   99930 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:37:24.175165   99930 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:37:24.175171   99930 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:37:24.175177   99930 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:37:24.175183   99930 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:37:24.175189   99930 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1212 22:37:24.175196   99930 command_runner.go:130] > # CNI plugins.
	I1212 22:37:24.175200   99930 command_runner.go:130] > [crio.network]
	I1212 22:37:24.175209   99930 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:37:24.175216   99930 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 22:37:24.175221   99930 command_runner.go:130] > # cni_default_network = ""
	I1212 22:37:24.175229   99930 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:37:24.175254   99930 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:37:24.175271   99930 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:37:24.175278   99930 command_runner.go:130] > # plugin_dirs = [
	I1212 22:37:24.175282   99930 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:37:24.175288   99930 command_runner.go:130] > # ]
	I1212 22:37:24.175294   99930 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:37:24.175300   99930 command_runner.go:130] > [crio.metrics]
	I1212 22:37:24.175305   99930 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:37:24.175311   99930 command_runner.go:130] > enable_metrics = true
	I1212 22:37:24.175316   99930 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:37:24.175323   99930 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 22:37:24.175330   99930 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:37:24.175338   99930 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:37:24.175346   99930 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:37:24.175351   99930 command_runner.go:130] > # metrics_collectors = [
	I1212 22:37:24.175355   99930 command_runner.go:130] > # 	"operations",
	I1212 22:37:24.175362   99930 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:37:24.175367   99930 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:37:24.175375   99930 command_runner.go:130] > # 	"operations_errors",
	I1212 22:37:24.175381   99930 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:37:24.175386   99930 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:37:24.175390   99930 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:37:24.175396   99930 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:37:24.175401   99930 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:37:24.175407   99930 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:37:24.175411   99930 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:37:24.175418   99930 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:37:24.175422   99930 command_runner.go:130] > # 	"containers_oom",
	I1212 22:37:24.175427   99930 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:37:24.175434   99930 command_runner.go:130] > # 	"operations_total",
	I1212 22:37:24.175445   99930 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:37:24.175455   99930 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:37:24.175465   99930 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:37:24.175475   99930 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:37:24.175486   99930 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:37:24.175496   99930 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:37:24.175506   99930 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:37:24.175516   99930 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:37:24.175526   99930 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:37:24.175535   99930 command_runner.go:130] > # ]
	I1212 22:37:24.175546   99930 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:37:24.175554   99930 command_runner.go:130] > # metrics_port = 9090
	I1212 22:37:24.175566   99930 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:37:24.175575   99930 command_runner.go:130] > # metrics_socket = ""
	I1212 22:37:24.175587   99930 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:37:24.175600   99930 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:37:24.175613   99930 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:37:24.175624   99930 command_runner.go:130] > # certificate on any modification event.
	I1212 22:37:24.175633   99930 command_runner.go:130] > # metrics_cert = ""
	I1212 22:37:24.175642   99930 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:37:24.175654   99930 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:37:24.175663   99930 command_runner.go:130] > # metrics_key = ""
	I1212 22:37:24.175676   99930 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:37:24.175685   99930 command_runner.go:130] > [crio.tracing]
	I1212 22:37:24.175696   99930 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:37:24.175707   99930 command_runner.go:130] > # enable_tracing = false
	I1212 22:37:24.175718   99930 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 22:37:24.175725   99930 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:37:24.175736   99930 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:37:24.175746   99930 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:37:24.175755   99930 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:37:24.175764   99930 command_runner.go:130] > [crio.stats]
	I1212 22:37:24.175774   99930 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:37:24.175785   99930 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:37:24.175795   99930 command_runner.go:130] > # stats_collection_period = 0
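
The commented defaults above cover CRI-O's metrics server (metrics_port 9090), OpenTelemetry tracing (collector on 0.0.0.0:4317) and stats collection. For reference only: if enable_metrics were switched on, the endpoint could be probed with a plain HTTP GET. A minimal sketch, assuming the default metrics_port = 9090 and CRI-O's usual Prometheus-format /metrics path (both are assumptions, not something this log exercises):

    // Hypothetical probe of CRI-O's Prometheus metrics endpoint, assuming
    // enable_metrics = true and the default metrics_port = 9090 shown above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        resp, err := http.Get("http://127.0.0.1:9090/metrics")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        // Print only the image-pull counters; their names correspond to the
        // "image_pulls_*" entries in the metrics list dumped above.
        for _, line := range strings.Split(string(body), "\n") {
            if strings.Contains(line, "image_pulls") {
                fmt.Println(line)
            }
        }
    }
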
	I1212 22:37:24.175902   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:37:24.175916   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:37:24.175942   99930 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:37:24.175974   99930 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-054207 NodeName:multinode-054207-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:37:24.176099   99930 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-054207-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:37:24.176153   99930 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:37:24.176205   99930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:37:24.185316   99930 command_runner.go:130] > kubeadm
	I1212 22:37:24.185335   99930 command_runner.go:130] > kubectl
	I1212 22:37:24.185339   99930 command_runner.go:130] > kubelet
	I1212 22:37:24.185356   99930 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:37:24.185479   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 22:37:24.194039   99930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 22:37:24.209645   99930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
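
Both kubelet files above are streamed over SSH from memory ("scp memory"). A local-equivalent sketch of the same step: write the 10-kubeadm.conf drop-in (reconstructed from the unit text logged at kubeadm.go:976 above) and reload systemd. This is only an illustration of what ends up on disk; minikube itself pushes the bytes remotely through its ssh_runner, and the paths require root on the VM:

    // Local-only sketch: write the kubelet drop-in and reload systemd.
    // Paths and the ExecStart line are copied from the log.
    package main

    import (
        "os"
        "os/exec"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15

    [Install]
    `

    func main() {
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        // Pick up the new drop-in; enable/start happens later in the log,
        // right after "kubeadm join" succeeds.
        if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }
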
	I1212 22:37:24.225683   99930 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1212 22:37:24.229360   99930 command_runner.go:130] > 192.168.39.172	control-plane.minikube.internal
	I1212 22:37:24.229608   99930 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:37:24.229907   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:37:24.230004   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:37:24.230040   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:37:24.244638   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I1212 22:37:24.245050   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:37:24.245467   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:37:24.245490   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:37:24.245914   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:37:24.246130   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:37:24.246283   99930 start.go:304] JoinCluster: &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:37:24.246428   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 22:37:24.246446   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:37:24.249445   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:37:24.249880   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:37:24.249908   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:37:24.250085   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:37:24.250255   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:37:24.250437   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:37:24.250575   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:37:24.422676   99930 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e08jg7.yi3iffc6l8tz5mkf --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
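
The join command above is produced by running kubeadm on the control-plane node with --print-join-command. A minimal sketch of capturing it, using the same binary path and flags as the log (minikube wraps the identical call in sudo over SSH):

    // Run "kubeadm token create --print-join-command --ttl=0" and capture the
    // one-line join command it emits, as logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command(
            "/var/lib/minikube/binaries/v1.28.4/kubeadm",
            "token", "create", "--print-join-command", "--ttl=0",
        ).Output()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))
        // e.g. "kubeadm join control-plane.minikube.internal:8443 --token ... --discovery-token-ca-cert-hash sha256:..."
        fmt.Println(joinCmd)
    }
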
	I1212 22:37:24.422962   99930 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:37:24.423008   99930 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:37:24.423457   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:37:24.423491   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:37:24.439302   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1212 22:37:24.439722   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:37:24.440184   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:37:24.440199   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:37:24.440552   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:37:24.440729   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:37:24.440896   99930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-054207-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 22:37:24.440925   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:37:24.443702   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:37:24.444097   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:37:24.444119   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:37:24.444243   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:37:24.444420   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:37:24.444560   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:37:24.444712   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:37:24.641472   99930 command_runner.go:130] > node/multinode-054207-m02 cordoned
	I1212 22:37:27.694254   99930 command_runner.go:130] > pod "busybox-5bc68d56bd-trmtr" has DeletionTimestamp older than 1 seconds, skipping
	I1212 22:37:27.694287   99930 command_runner.go:130] > node/multinode-054207-m02 drained
	I1212 22:37:27.695971   99930 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 22:37:27.696006   99930 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-gh2q6, kube-system/kube-proxy-jtfmt
	I1212 22:37:27.696037   99930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-054207-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.255112048s)
	I1212 22:37:27.696060   99930 node.go:108] successfully drained node "m02"
	I1212 22:37:27.696516   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:37:27.696729   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:37:27.697204   99930 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 22:37:27.697262   99930 round_trippers.go:463] DELETE https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:27.697270   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:27.697279   99930 round_trippers.go:473]     Content-Type: application/json
	I1212 22:37:27.697287   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:27.697293   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:27.710375   99930 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1212 22:37:27.710400   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:27.710409   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:27 GMT
	I1212 22:37:27.710416   99930 round_trippers.go:580]     Audit-Id: 5459f720-a25e-4b2b-a203-bac8896cb4d7
	I1212 22:37:27.710423   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:27.710431   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:27.710439   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:27.710447   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:27.710459   99930 round_trippers.go:580]     Content-Length: 171
	I1212 22:37:27.710489   99930 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-054207-m02","kind":"nodes","uid":"05ba4305-3ed3-43c4-a9fa-96840a3e51d4"}}
	I1212 22:37:27.710529   99930 node.go:124] successfully deleted node "m02"
	I1212 22:37:27.710549   99930 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
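
The stale worker is removed in two steps: a kubectl drain (cordon plus pod eviction/deletion), then a raw DELETE against /api/v1/nodes/multinode-054207-m02. The same delete expressed with client-go instead of hand-rolled round-trippers, using the kubeconfig path from the log, would look roughly like this:

    // Remove the stale "m02" Node object so the machine can rejoin cleanly.
    // The drain itself was already done via "kubectl drain" in the log.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-054207-m02", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
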
	I1212 22:37:27.710592   99930 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:37:27.710615   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e08jg7.yi3iffc6l8tz5mkf --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-054207-m02"
	I1212 22:37:27.764249   99930 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:37:27.923174   99930 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 22:37:27.923202   99930 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 22:37:27.981187   99930 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:37:27.981213   99930 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:37:27.981219   99930 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:37:28.123410   99930 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 22:37:28.641689   99930 command_runner.go:130] > This node has joined the cluster:
	I1212 22:37:28.641721   99930 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 22:37:28.641736   99930 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 22:37:28.641747   99930 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 22:37:28.645108   99930 command_runner.go:130] ! W1212 22:37:27.756308    2686 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 22:37:28.645148   99930 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 22:37:28.645159   99930 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 22:37:28.645167   99930 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 22:37:28.645191   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 22:37:28.887591   99930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-054207 minikube.k8s.io/updated_at=2023_12_12T22_37_28_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:37:29.000723   99930 command_runner.go:130] > node/multinode-054207-m02 labeled
	I1212 22:37:29.015745   99930 command_runner.go:130] > node/multinode-054207-m03 labeled
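
The minikube.k8s.io/* labels are applied above by exec'ing kubectl label with --overwrite. An illustrative client-go equivalent (not the code path minikube uses) that merge-patches a subset of those labels onto the freshly joined node:

    // Merge-patch metadata.labels on the Node; label values are copied from the log.
    package main

    import (
        "context"
        "encoding/json"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        labels := map[string]string{
            "minikube.k8s.io/version": "v1.32.0",
            "minikube.k8s.io/name":    "multinode-054207",
            "minikube.k8s.io/primary": "false",
        }
        patch, _ := json.Marshal(map[string]interface{}{"metadata": map[string]interface{}{"labels": labels}})
        _, err = cs.CoreV1().Nodes().Patch(context.Background(), "multinode-054207-m02",
            types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
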
	I1212 22:37:29.019057   99930 start.go:306] JoinCluster complete in 4.772768896s
	I1212 22:37:29.019084   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:37:29.019093   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:37:29.019163   99930 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:37:29.026851   99930 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:37:29.026882   99930 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 22:37:29.026894   99930 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 22:37:29.026905   99930 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:37:29.026915   99930 command_runner.go:130] > Access: 2023-12-12 22:35:00.248811046 +0000
	I1212 22:37:29.026923   99930 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 22:37:29.026931   99930 command_runner.go:130] > Change: 2023-12-12 22:34:58.322811046 +0000
	I1212 22:37:29.026937   99930 command_runner.go:130] >  Birth: -
	I1212 22:37:29.027059   99930 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:37:29.027084   99930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:37:29.046738   99930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:37:29.410006   99930 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:37:29.414225   99930 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:37:29.417697   99930 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 22:37:29.428815   99930 command_runner.go:130] > daemonset.apps/kindnet configured
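
The CNI step above first stats /opt/cni/bin/portmap and then applies the kindnet manifest with the pinned kubectl. A condensed sketch of the same two actions, with all paths copied from the log:

    // Check the portmap CNI plugin exists, then apply the kindnet manifest.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
            panic(fmt.Errorf("CNI portmap plugin missing: %w", err))
        }
        out, err := exec.Command(
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml",
        ).CombinedOutput()
        if err != nil {
            panic(fmt.Errorf("%v: %s", err, out))
        }
        // Expected output mirrors the log: clusterrole/clusterrolebinding/
        // serviceaccount "unchanged", daemonset.apps/kindnet "configured".
        fmt.Print(string(out))
    }
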
	I1212 22:37:29.432263   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:37:29.432510   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:37:29.432925   99930 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:37:29.432942   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.432951   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.432960   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.434933   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:37:29.434952   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.434959   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.434965   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.434972   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.434977   99930 round_trippers.go:580]     Content-Length: 291
	I1212 22:37:29.434982   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.434988   99930 round_trippers.go:580]     Audit-Id: 116dbdbb-9c58-487c-b9fa-7204d38ea349
	I1212 22:37:29.434994   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.435014   99930 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"890","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 22:37:29.435110   99930 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-054207" context rescaled to 1 replicas
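
The coredns rescale above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale request). A client-go sketch of the same read-then-update, pinning kube-system/coredns to 1 replica as in the log:

    // Read the coredns scale subresource and set it to 1 replica if needed.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }
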
	I1212 22:37:29.435145   99930 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:37:29.437006   99930 out.go:177] * Verifying Kubernetes components...
	I1212 22:37:29.438329   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:37:29.452589   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:37:29.452870   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:37:29.453150   99930 node_ready.go:35] waiting up to 6m0s for node "multinode-054207-m02" to be "Ready" ...
	I1212 22:37:29.453235   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:29.453246   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.453258   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.453269   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.455752   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:29.455780   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.455792   99930 round_trippers.go:580]     Audit-Id: 7b74fde3-0aee-40a5-8f7c-735a3b1e6d22
	I1212 22:37:29.455800   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.455809   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.455817   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.455829   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.455838   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.456023   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"255afb45-7963-4a1d-a2ad-72f01ff3d57e","resourceVersion":"1035","creationTimestamp":"2023-12-12T22:37:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_37_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 22:37:29.456306   99930 node_ready.go:49] node "multinode-054207-m02" has status "Ready":"True"
	I1212 22:37:29.456321   99930 node_ready.go:38] duration metric: took 3.153565ms waiting for node "multinode-054207-m02" to be "Ready" ...
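
node_ready.go above polls the Node object until its Ready condition reports "True", within a 6m0s budget. Condensed into a self-contained sketch (same node name and timeout; the 2-second poll interval is an arbitrary choice):

    // Poll the Node until NodeReady is True or the deadline passes.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-054207-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for node to be Ready")
    }
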
	I1212 22:37:29.456333   99930 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:37:29.456391   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:37:29.456402   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.456413   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.456423   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.459867   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:29.459883   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.459892   99930 round_trippers.go:580]     Audit-Id: 9200dd9d-c507-4cf9-885b-dc75ed59d9ae
	I1212 22:37:29.459900   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.459908   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.459915   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.459927   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.459935   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.460945   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1042"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82230 chars]
	I1212 22:37:29.463317   99930 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.463382   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:37:29.463390   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.463398   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.463404   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.465409   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:37:29.465422   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.465428   99930 round_trippers.go:580]     Audit-Id: 97f14a9a-1587-43d0-92b3-165cabc9b602
	I1212 22:37:29.465434   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.465439   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.465444   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.465451   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.465457   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.465618   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1212 22:37:29.466090   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:29.466106   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.466113   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.466119   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.468028   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:37:29.468044   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.468054   99930 round_trippers.go:580]     Audit-Id: 743f8d09-9647-46ae-9a08-aa50a9915433
	I1212 22:37:29.468062   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.468071   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.468086   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.468091   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.468097   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.468429   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:29.468700   99930 pod_ready.go:92] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:29.468712   99930 pod_ready.go:81] duration metric: took 5.374613ms waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.468721   99930 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.468762   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:37:29.468769   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.468776   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.468784   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.470542   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:37:29.470555   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.470562   99930 round_trippers.go:580]     Audit-Id: 2dfd1834-7207-472e-a544-9bff04239f54
	I1212 22:37:29.470570   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.470577   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.470592   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.470600   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.470612   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.470794   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"891","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1212 22:37:29.471146   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:29.471159   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.471165   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.471173   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.473261   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:29.473282   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.473291   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.473300   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.473316   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.473324   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.473335   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.473345   99930 round_trippers.go:580]     Audit-Id: 71bb893a-9aa3-4d49-b687-e5d11ba65170
	I1212 22:37:29.473492   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:29.473795   99930 pod_ready.go:92] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:29.473808   99930 pod_ready.go:81] duration metric: took 5.078974ms waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.473826   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.473875   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:37:29.473884   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.473890   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.473896   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.475838   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:37:29.475852   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.475858   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.475864   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.475870   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.475878   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.475883   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.475892   99930 round_trippers.go:580]     Audit-Id: c651e91f-145b-4f79-83e4-3b4a70af143d
	I1212 22:37:29.476312   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"882","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1212 22:37:29.476760   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:29.476775   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.476782   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.476788   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.479288   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:29.479300   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.479307   99930 round_trippers.go:580]     Audit-Id: 1efbfd1f-dc34-4d24-a5af-9d9b92ebef4d
	I1212 22:37:29.479312   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.479317   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.479324   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.479332   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.479345   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.479584   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:29.479883   99930 pod_ready.go:92] pod "kube-apiserver-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:29.479896   99930 pod_ready.go:81] duration metric: took 6.05952ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.479905   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.479979   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:37:29.479990   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.479998   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.480011   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.482034   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:29.482054   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.482063   99930 round_trippers.go:580]     Audit-Id: 52a717a1-e436-493e-adb5-6fdc5886eb4b
	I1212 22:37:29.482071   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.482084   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.482092   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.482103   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.482115   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.482508   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"893","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1212 22:37:29.482895   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:29.482909   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.482916   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.482925   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.485151   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:29.485170   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.485179   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.485188   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.485202   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.485210   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.485222   99930 round_trippers.go:580]     Audit-Id: 1eee2c4e-e768-47c8-83b4-b571eca3e09f
	I1212 22:37:29.485232   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.485921   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:29.486251   99930 pod_ready.go:92] pod "kube-controller-manager-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:29.486266   99930 pod_ready.go:81] duration metric: took 6.351109ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.486275   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:29.653649   99930 request.go:629] Waited for 167.288893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:37:29.653721   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:37:29.653728   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.653739   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.653748   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.657609   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:29.657633   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.657644   99930 round_trippers.go:580]     Audit-Id: ea14d70b-9bde-4926-874d-2a2c0835ad35
	I1212 22:37:29.657653   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.657662   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.657674   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.657681   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.657690   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.658049   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"1039","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1212 22:37:29.854031   99930 request.go:629] Waited for 195.422684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:29.854116   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:29.854124   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:29.854135   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:29.854154   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:29.860071   99930 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 22:37:29.860100   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:29.860111   99930 round_trippers.go:580]     Audit-Id: a9a9448a-45f2-4b2f-b324-8789e5d088bf
	I1212 22:37:29.860120   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:29.860128   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:29.860136   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:29.860145   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:29.860157   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:29 GMT
	I1212 22:37:29.860329   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"255afb45-7963-4a1d-a2ad-72f01ff3d57e","resourceVersion":"1035","creationTimestamp":"2023-12-12T22:37:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_37_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 22:37:30.054154   99930 request.go:629] Waited for 193.384164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:37:30.054228   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:37:30.054236   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:30.054257   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:30.054284   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:30.057107   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:30.057132   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:30.057142   99930 round_trippers.go:580]     Audit-Id: 63e7fc7f-bf2b-49e7-8cfe-399f1abee39a
	I1212 22:37:30.057151   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:30.057159   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:30.057167   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:30.057175   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:30.057183   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:30 GMT
	I1212 22:37:30.057401   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"1039","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1212 22:37:30.253335   99930 request.go:629] Waited for 195.349239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:30.253422   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:30.253430   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:30.253441   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:30.253451   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:30.256681   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:30.256710   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:30.256721   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:30.256728   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:30.256736   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:30.256745   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:30.256753   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:30 GMT
	I1212 22:37:30.256760   99930 round_trippers.go:580]     Audit-Id: 3144ab87-de45-4a5b-9e4d-87d2e0b3d2ad
	I1212 22:37:30.256968   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"255afb45-7963-4a1d-a2ad-72f01ff3d57e","resourceVersion":"1035","creationTimestamp":"2023-12-12T22:37:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_37_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 22:37:30.758127   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:37:30.758155   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:30.758163   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:30.758171   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:30.760804   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:30.760829   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:30.760839   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:30.760847   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:30.760855   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:30.760862   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:30.760874   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:30 GMT
	I1212 22:37:30.760883   99930 round_trippers.go:580]     Audit-Id: f79b207e-c833-4c44-9062-513c64dc0d83
	I1212 22:37:30.761118   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"1051","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1212 22:37:30.761509   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:37:30.761522   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:30.761529   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:30.761535   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:30.763688   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:30.763703   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:30.763710   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:30.763715   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:30.763723   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:30.763732   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:30.763740   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:30 GMT
	I1212 22:37:30.763749   99930 round_trippers.go:580]     Audit-Id: 4fa5c997-412e-472a-9a17-6d79e596832e
	I1212 22:37:30.763903   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"255afb45-7963-4a1d-a2ad-72f01ff3d57e","resourceVersion":"1035","creationTimestamp":"2023-12-12T22:37:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_37_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 22:37:30.764143   99930 pod_ready.go:92] pod "kube-proxy-jtfmt" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:30.764157   99930 pod_ready.go:81] duration metric: took 1.277876539s waiting for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:30.764167   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:30.853491   99930 request.go:629] Waited for 89.253455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:37:30.853556   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:37:30.853561   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:30.853569   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:30.853575   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:30.858748   99930 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 22:37:30.858779   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:30.858786   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:30.858791   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:30.858796   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:30 GMT
	I1212 22:37:30.858801   99930 round_trippers.go:580]     Audit-Id: 1447407f-0beb-46c4-95eb-de7374468148
	I1212 22:37:30.858806   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:30.858811   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:30.858974   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"833","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:37:31.053809   99930 request.go:629] Waited for 194.384226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:31.053879   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:31.053884   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:31.053892   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:31.053898   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:31.056804   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:31.056823   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:31.056830   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:31 GMT
	I1212 22:37:31.056835   99930 round_trippers.go:580]     Audit-Id: 5cb5acec-a633-48e7-8c3d-ba9bf7d9a5b1
	I1212 22:37:31.056840   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:31.056846   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:31.056851   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:31.056856   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:31.057488   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:31.057876   99930 pod_ready.go:92] pod "kube-proxy-rnx8m" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:31.057897   99930 pod_ready.go:81] duration metric: took 293.724724ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:31.057906   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:31.253282   99930 request.go:629] Waited for 195.30806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:37:31.253376   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:37:31.253385   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:31.253396   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:31.253405   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:31.256468   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:31.256493   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:31.256503   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:31.256511   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:31.256519   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:31 GMT
	I1212 22:37:31.256526   99930 round_trippers.go:580]     Audit-Id: 3ee784ea-df89-4440-845c-78378767eb01
	I1212 22:37:31.256534   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:31.256542   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:31.256767   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xfhnh","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ca01f00-0c60-4a26-8baf-0718911a7f01","resourceVersion":"723","creationTimestamp":"2023-12-12T22:26:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1212 22:37:31.453518   99930 request.go:629] Waited for 196.327192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:37:31.453597   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:37:31.453602   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:31.453610   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:31.453616   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:31.456583   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:37:31.456602   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:31.456609   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:31.456617   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:31.456625   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:31.456635   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:31.456643   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:31 GMT
	I1212 22:37:31.456651   99930 round_trippers.go:580]     Audit-Id: 57ef7e7b-47c0-40ed-a710-92478d7429ff
	I1212 22:37:31.456751   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m03","uid":"b0e92539-35e0-4df7-a26b-9c088375b04e","resourceVersion":"1036","creationTimestamp":"2023-12-12T22:27:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_37_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:27:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3965 chars]
	I1212 22:37:31.457026   99930 pod_ready.go:92] pod "kube-proxy-xfhnh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:31.457043   99930 pod_ready.go:81] duration metric: took 399.131561ms waiting for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:31.457053   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:31.654319   99930 request.go:629] Waited for 197.195872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:37:31.654400   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:37:31.654408   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:31.654420   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:31.654431   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:31.657802   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:31.657828   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:31.657839   99930 round_trippers.go:580]     Audit-Id: d72ce25f-e724-40d5-8cbd-fb7a21421820
	I1212 22:37:31.657847   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:31.657856   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:31.657863   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:31.657875   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:31.657886   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:31 GMT
	I1212 22:37:31.658016   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"884","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1212 22:37:31.853836   99930 request.go:629] Waited for 195.357737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:31.853898   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:37:31.853903   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:31.853910   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:31.853922   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:31.858587   99930 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 22:37:31.858616   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:31.858626   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:31.858635   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:31.858642   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:31.858651   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:31 GMT
	I1212 22:37:31.858658   99930 round_trippers.go:580]     Audit-Id: 51a36962-11ec-4def-a24d-a88370e97486
	I1212 22:37:31.858666   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:31.858977   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:37:31.859399   99930 pod_ready.go:92] pod "kube-scheduler-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:37:31.859418   99930 pod_ready.go:81] duration metric: took 402.358361ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:37:31.859428   99930 pod_ready.go:38] duration metric: took 2.403082921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
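	(For context on the pod_ready polling logged above: a minimal, hypothetical client-go sketch — not minikube's actual pod_ready.go code — that reads a pod's Ready condition looks roughly like the following; the kubeconfig path and pod name are placeholders, not values from this run.)

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Placeholder kubeconfig path; adjust for a real cluster.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Fetch one pod and report its Ready condition, as the log lines above do.
	    	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-example", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady {
	    			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
	    		}
	    	}
	    }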
	I1212 22:37:31.859442   99930 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:37:31.859489   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:37:31.873217   99930 system_svc.go:56] duration metric: took 13.76556ms WaitForService to wait for kubelet.
	I1212 22:37:31.873252   99930 kubeadm.go:581] duration metric: took 2.438081305s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:37:31.873277   99930 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:37:32.053714   99930 request.go:629] Waited for 180.348696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I1212 22:37:32.053788   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:37:32.053795   99930 round_trippers.go:469] Request Headers:
	I1212 22:37:32.053824   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:37:32.053838   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:37:32.056911   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:37:32.056929   99930 round_trippers.go:577] Response Headers:
	I1212 22:37:32.056936   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:37:32.056941   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:37:32.056946   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:37:32.056951   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:37:32 GMT
	I1212 22:37:32.056957   99930 round_trippers.go:580]     Audit-Id: 87b2e509-d4cf-4e4f-bda3-ddf4efa056af
	I1212 22:37:32.056962   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:37:32.057532   99930 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1054"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I1212 22:37:32.058084   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:37:32.058102   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:37:32.058115   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:37:32.058119   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:37:32.058123   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:37:32.058129   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:37:32.058133   99930 node_conditions.go:105] duration metric: took 184.851051ms to run NodePressure ...
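	(The cpu and ephemeral-storage capacities logged by node_conditions.go above come from the NodeList response; a minimal, hypothetical client-go sketch — again, not minikube's node_conditions.go — that prints the same figures could look like this, with a placeholder kubeconfig path.)

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Placeholder kubeconfig path; adjust for a real cluster.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// List all nodes and print per-node cpu and ephemeral-storage capacity.
	    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, n := range nodes.Items {
	    		cpu := n.Status.Capacity[corev1.ResourceCPU]
	    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	    	}
	    }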
	I1212 22:37:32.058144   99930 start.go:228] waiting for startup goroutines ...
	I1212 22:37:32.058189   99930 start.go:242] writing updated cluster config ...
	I1212 22:37:32.058696   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:37:32.058826   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:37:32.061568   99930 out.go:177] * Starting worker node multinode-054207-m03 in cluster multinode-054207
	I1212 22:37:32.062837   99930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:37:32.062860   99930 cache.go:56] Caching tarball of preloaded images
	I1212 22:37:32.062951   99930 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:37:32.062964   99930 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:37:32.063112   99930 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/config.json ...
	I1212 22:37:32.063352   99930 start.go:365] acquiring machines lock for multinode-054207-m03: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 22:37:32.063398   99930 start.go:369] acquired machines lock for "multinode-054207-m03" in 25.293µs
	I1212 22:37:32.063412   99930 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:37:32.063417   99930 fix.go:54] fixHost starting: m03
	I1212 22:37:32.063658   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:37:32.063677   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:37:32.078517   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I1212 22:37:32.078960   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:37:32.079375   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:37:32.079396   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:37:32.079743   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:37:32.079967   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:37:32.080124   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetState
	I1212 22:37:32.081598   99930 fix.go:102] recreateIfNeeded on multinode-054207-m03: state=Running err=<nil>
	W1212 22:37:32.081622   99930 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:37:32.083678   99930 out.go:177] * Updating the running kvm2 "multinode-054207-m03" VM ...
	I1212 22:37:32.085076   99930 machine.go:88] provisioning docker machine ...
	I1212 22:37:32.085094   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:37:32.085308   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetMachineName
	I1212 22:37:32.085487   99930 buildroot.go:166] provisioning hostname "multinode-054207-m03"
	I1212 22:37:32.085501   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetMachineName
	I1212 22:37:32.085637   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:37:32.087792   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.088205   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.088226   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.088390   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:37:32.088559   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.088671   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.088805   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:37:32.088933   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:37:32.089254   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1212 22:37:32.089272   99930 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-054207-m03 && echo "multinode-054207-m03" | sudo tee /etc/hostname
	I1212 22:37:32.228044   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-054207-m03
	
	I1212 22:37:32.228082   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:37:32.230702   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.231110   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.231151   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.231311   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:37:32.231529   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.231690   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.231820   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:37:32.231979   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:37:32.232358   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1212 22:37:32.232388   99930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-054207-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-054207-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-054207-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:37:32.352185   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:37:32.352224   99930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 22:37:32.352241   99930 buildroot.go:174] setting up certificates
	I1212 22:37:32.352253   99930 provision.go:83] configureAuth start
	I1212 22:37:32.352261   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetMachineName
	I1212 22:37:32.352610   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetIP
	I1212 22:37:32.355401   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.355798   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.355820   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.356010   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:37:32.358332   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.358687   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.358716   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.358832   99930 provision.go:138] copyHostCerts
	I1212 22:37:32.358866   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:37:32.358903   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 22:37:32.358916   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 22:37:32.358999   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 22:37:32.359091   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:37:32.359115   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 22:37:32.359121   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 22:37:32.359172   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 22:37:32.359258   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:37:32.359285   99930 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 22:37:32.359295   99930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 22:37:32.359330   99930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 22:37:32.359394   99930 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.multinode-054207-m03 san=[192.168.39.48 192.168.39.48 localhost 127.0.0.1 minikube multinode-054207-m03]
	I1212 22:37:32.681044   99930 provision.go:172] copyRemoteCerts
	I1212 22:37:32.681108   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:37:32.681133   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:37:32.684248   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.684702   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.684739   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.684903   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:37:32.685118   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.685263   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:37:32.685413   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m03/id_rsa Username:docker}
	I1212 22:37:32.779681   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:37:32.779793   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:37:32.804125   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:37:32.804212   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 22:37:32.829119   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:37:32.829232   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:37:32.851756   99930 provision.go:86] duration metric: configureAuth took 499.489894ms
	I1212 22:37:32.851785   99930 buildroot.go:189] setting minikube options for container-runtime
	I1212 22:37:32.852013   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:37:32.852106   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:37:32.854929   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.855390   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:37:32.855420   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:37:32.855639   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:37:32.855858   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.856069   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:37:32.856275   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:37:32.856521   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:37:32.856936   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1212 22:37:32.856965   99930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:39:03.342437   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:39:03.342473   99930 machine.go:91] provisioned docker machine in 1m31.257382649s
	I1212 22:39:03.342485   99930 start.go:300] post-start starting for "multinode-054207-m03" (driver="kvm2")
	I1212 22:39:03.342495   99930 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:39:03.342512   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:39:03.342920   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:39:03.342978   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:39:03.346560   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.347046   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:03.347082   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.347314   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:39:03.347545   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:39:03.347758   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:39:03.347932   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m03/id_rsa Username:docker}
	I1212 22:39:03.437541   99930 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:39:03.442258   99930 command_runner.go:130] > NAME=Buildroot
	I1212 22:39:03.442287   99930 command_runner.go:130] > VERSION=2021.02.12-1-g161fa11-dirty
	I1212 22:39:03.442295   99930 command_runner.go:130] > ID=buildroot
	I1212 22:39:03.442306   99930 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 22:39:03.442314   99930 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 22:39:03.442357   99930 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 22:39:03.442378   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 22:39:03.442474   99930 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 22:39:03.442589   99930 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 22:39:03.442615   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /etc/ssl/certs/838252.pem
	I1212 22:39:03.442732   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:39:03.451730   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:39:03.476978   99930 start.go:303] post-start completed in 134.477227ms
	I1212 22:39:03.477003   99930 fix.go:56] fixHost completed within 1m31.413585687s
	I1212 22:39:03.477026   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:39:03.479808   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.480180   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:03.480218   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.480363   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:39:03.480588   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:39:03.480752   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:39:03.480926   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:39:03.481094   99930 main.go:141] libmachine: Using SSH client type: native
	I1212 22:39:03.481409   99930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1212 22:39:03.481421   99930 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 22:39:03.604226   99930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702420743.588768345
	
	I1212 22:39:03.604251   99930 fix.go:206] guest clock: 1702420743.588768345
	I1212 22:39:03.604261   99930 fix.go:219] Guest: 2023-12-12 22:39:03.588768345 +0000 UTC Remote: 2023-12-12 22:39:03.477007181 +0000 UTC m=+554.237243504 (delta=111.761164ms)
	I1212 22:39:03.604281   99930 fix.go:190] guest clock delta is within tolerance: 111.761164ms
	I1212 22:39:03.604287   99930 start.go:83] releasing machines lock for "multinode-054207-m03", held for 1m31.540880011s
	I1212 22:39:03.604315   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:39:03.604658   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetIP
	I1212 22:39:03.607500   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.607895   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:03.607925   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.610051   99930 out.go:177] * Found network options:
	I1212 22:39:03.611454   99930 out.go:177]   - NO_PROXY=192.168.39.172,192.168.39.15
	W1212 22:39:03.612839   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 22:39:03.612859   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:39:03.612872   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:39:03.613508   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:39:03.613697   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .DriverName
	I1212 22:39:03.613788   99930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:39:03.613823   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	W1212 22:39:03.613915   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 22:39:03.613937   99930 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:39:03.614013   99930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:39:03.614027   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHHostname
	I1212 22:39:03.616598   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.616942   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.616987   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:03.617009   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.617155   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:39:03.617342   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:39:03.617485   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:03.617506   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:03.617536   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:39:03.617662   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHPort
	I1212 22:39:03.617739   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m03/id_rsa Username:docker}
	I1212 22:39:03.617839   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHKeyPath
	I1212 22:39:03.617983   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetSSHUsername
	I1212 22:39:03.618094   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m03/id_rsa Username:docker}
	I1212 22:39:03.729426   99930 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:39:03.851232   99930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:39:03.857582   99930 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 22:39:03.857866   99930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 22:39:03.857932   99930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:39:03.867690   99930 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 22:39:03.867717   99930 start.go:475] detecting cgroup driver to use...
	I1212 22:39:03.867780   99930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:39:03.883760   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:39:03.896454   99930 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:39:03.896510   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:39:03.913537   99930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:39:03.927743   99930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:39:04.083865   99930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:39:04.227306   99930 docker.go:219] disabling docker service ...
	I1212 22:39:04.227383   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:39:04.244703   99930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:39:04.258128   99930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:39:04.404513   99930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:39:04.543394   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:39:04.557421   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:39:04.578380   99930 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:39:04.578424   99930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:39:04.578483   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:39:04.589618   99930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:39:04.589684   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:39:04.600559   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:39:04.611495   99930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:39:04.621814   99930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:39:04.632754   99930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:39:04.642085   99930 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 22:39:04.642198   99930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:39:04.651488   99930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:39:04.784762   99930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:39:05.037336   99930 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:39:05.037436   99930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:39:05.043061   99930 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:39:05.043124   99930 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:39:05.043142   99930 command_runner.go:130] > Device: 16h/22d	Inode: 1178        Links: 1
	I1212 22:39:05.043157   99930 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:39:05.043170   99930 command_runner.go:130] > Access: 2023-12-12 22:39:04.946943936 +0000
	I1212 22:39:05.043182   99930 command_runner.go:130] > Modify: 2023-12-12 22:39:04.946943936 +0000
	I1212 22:39:05.043188   99930 command_runner.go:130] > Change: 2023-12-12 22:39:04.946943936 +0000
	I1212 22:39:05.043195   99930 command_runner.go:130] >  Birth: -
	I1212 22:39:05.043222   99930 start.go:543] Will wait 60s for crictl version
	I1212 22:39:05.043304   99930 ssh_runner.go:195] Run: which crictl
	I1212 22:39:05.048179   99930 command_runner.go:130] > /usr/bin/crictl
	I1212 22:39:05.048269   99930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:39:05.094161   99930 command_runner.go:130] > Version:  0.1.0
	I1212 22:39:05.094215   99930 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:39:05.094238   99930 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1212 22:39:05.094251   99930 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:39:05.095978   99930 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 22:39:05.096077   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:39:05.151559   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:39:05.151592   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:39:05.151602   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:39:05.151609   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:39:05.151619   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:39:05.151628   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:39:05.151633   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:39:05.151641   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:39:05.151650   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:39:05.151668   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:39:05.151680   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:39:05.151689   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:39:05.151776   99930 ssh_runner.go:195] Run: crio --version
	I1212 22:39:05.202304   99930 command_runner.go:130] > crio version 1.24.1
	I1212 22:39:05.202330   99930 command_runner.go:130] > Version:          1.24.1
	I1212 22:39:05.202337   99930 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1212 22:39:05.202342   99930 command_runner.go:130] > GitTreeState:     dirty
	I1212 22:39:05.202349   99930 command_runner.go:130] > BuildDate:        2023-12-12T19:20:53Z
	I1212 22:39:05.202353   99930 command_runner.go:130] > GoVersion:        go1.19.9
	I1212 22:39:05.202357   99930 command_runner.go:130] > Compiler:         gc
	I1212 22:39:05.202362   99930 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:39:05.202371   99930 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:39:05.202381   99930 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:39:05.202386   99930 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:39:05.202390   99930 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:39:05.207519   99930 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 22:39:05.209096   99930 out.go:177]   - env NO_PROXY=192.168.39.172
	I1212 22:39:05.210625   99930 out.go:177]   - env NO_PROXY=192.168.39.172,192.168.39.15
	I1212 22:39:05.212068   99930 main.go:141] libmachine: (multinode-054207-m03) Calling .GetIP
	I1212 22:39:05.214994   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:05.215414   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:40:46", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:27:30 +0000 UTC Type:0 Mac:52:54:00:50:40:46 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-054207-m03 Clientid:01:52:54:00:50:40:46}
	I1212 22:39:05.215446   99930 main.go:141] libmachine: (multinode-054207-m03) DBG | domain multinode-054207-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:50:40:46 in network mk-multinode-054207
	I1212 22:39:05.215628   99930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 22:39:05.220066   99930 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 22:39:05.220121   99930 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207 for IP: 192.168.39.48
	I1212 22:39:05.220146   99930 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:39:05.220328   99930 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 22:39:05.220377   99930 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 22:39:05.220394   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:39:05.220415   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:39:05.220432   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:39:05.220448   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:39:05.220518   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 22:39:05.220557   99930 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 22:39:05.220573   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 22:39:05.220608   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:39:05.220639   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:39:05.220673   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 22:39:05.220728   99930 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 22:39:05.220766   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> /usr/share/ca-certificates/838252.pem
	I1212 22:39:05.220786   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:39:05.220801   99930 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem -> /usr/share/ca-certificates/83825.pem
	I1212 22:39:05.221297   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:39:05.246582   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 22:39:05.271153   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:39:05.297726   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 22:39:05.322331   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 22:39:05.348249   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:39:05.373683   99930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 22:39:05.399335   99930 ssh_runner.go:195] Run: openssl version
	I1212 22:39:05.406046   99930 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 22:39:05.406128   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:39:05.417281   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:39:05.421892   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:39:05.422012   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:39:05.422087   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:39:05.427228   99930 command_runner.go:130] > b5213941
	I1212 22:39:05.427505   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:39:05.436169   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 22:39:05.445957   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 22:39:05.450432   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:39:05.450457   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 22:39:05.450501   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 22:39:05.455866   99930 command_runner.go:130] > 51391683
	I1212 22:39:05.456175   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 22:39:05.465119   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 22:39:05.478065   99930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 22:39:05.483910   99930 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:39:05.483952   99930 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 22:39:05.484001   99930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 22:39:05.489700   99930 command_runner.go:130] > 3ec20f2e
	I1212 22:39:05.489788   99930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 22:39:05.498926   99930 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:39:05.502866   99930 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:39:05.502909   99930 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:39:05.503021   99930 ssh_runner.go:195] Run: crio config
	I1212 22:39:05.557965   99930 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:39:05.558006   99930 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:39:05.558017   99930 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:39:05.558023   99930 command_runner.go:130] > #
	I1212 22:39:05.558033   99930 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:39:05.558043   99930 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:39:05.558053   99930 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:39:05.558065   99930 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:39:05.558075   99930 command_runner.go:130] > # reload'.
	I1212 22:39:05.558086   99930 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:39:05.558096   99930 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:39:05.558107   99930 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:39:05.558116   99930 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:39:05.558133   99930 command_runner.go:130] > [crio]
	I1212 22:39:05.558143   99930 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:39:05.558152   99930 command_runner.go:130] > # containers images, in this directory.
	I1212 22:39:05.558192   99930 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 22:39:05.558213   99930 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:39:05.558221   99930 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 22:39:05.558230   99930 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:39:05.558240   99930 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:39:05.558252   99930 command_runner.go:130] > storage_driver = "overlay"
	I1212 22:39:05.558261   99930 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:39:05.558270   99930 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:39:05.558281   99930 command_runner.go:130] > storage_option = [
	I1212 22:39:05.558314   99930 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 22:39:05.558451   99930 command_runner.go:130] > ]
	I1212 22:39:05.558469   99930 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:39:05.558479   99930 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:39:05.559006   99930 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:39:05.559026   99930 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:39:05.559038   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:39:05.559045   99930 command_runner.go:130] > # always happen on a node reboot
	I1212 22:39:05.559522   99930 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:39:05.559542   99930 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:39:05.559553   99930 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:39:05.559569   99930 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:39:05.560114   99930 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:39:05.560130   99930 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:39:05.560145   99930 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:39:05.560590   99930 command_runner.go:130] > # internal_wipe = true
	I1212 22:39:05.560601   99930 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:39:05.560612   99930 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:39:05.560625   99930 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:39:05.561091   99930 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:39:05.561104   99930 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:39:05.561110   99930 command_runner.go:130] > [crio.api]
	I1212 22:39:05.561123   99930 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:39:05.561527   99930 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:39:05.561542   99930 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:39:05.562049   99930 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:39:05.562069   99930 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:39:05.562077   99930 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:39:05.562526   99930 command_runner.go:130] > # stream_port = "0"
	I1212 22:39:05.562537   99930 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:39:05.563066   99930 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:39:05.563076   99930 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:39:05.563937   99930 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:39:05.563952   99930 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:39:05.563959   99930 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:39:05.563963   99930 command_runner.go:130] > # minutes.
	I1212 22:39:05.563967   99930 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:39:05.563979   99930 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:39:05.563992   99930 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:39:05.564001   99930 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:39:05.564011   99930 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:39:05.564019   99930 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:39:05.564025   99930 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:39:05.564030   99930 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:39:05.564037   99930 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:39:05.564042   99930 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 22:39:05.564053   99930 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:39:05.564057   99930 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 22:39:05.564071   99930 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:39:05.564085   99930 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:39:05.564092   99930 command_runner.go:130] > [crio.runtime]
	I1212 22:39:05.564106   99930 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:39:05.564115   99930 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:39:05.564120   99930 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:39:05.564127   99930 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:39:05.564131   99930 command_runner.go:130] > # default_ulimits = [
	I1212 22:39:05.564137   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564143   99930 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:39:05.564149   99930 command_runner.go:130] > # no_pivot = false
	I1212 22:39:05.564154   99930 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:39:05.564164   99930 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:39:05.564175   99930 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:39:05.564186   99930 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:39:05.564195   99930 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:39:05.564209   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:39:05.564220   99930 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 22:39:05.564226   99930 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:39:05.564235   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:39:05.564240   99930 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:39:05.564246   99930 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:39:05.564253   99930 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:39:05.564260   99930 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:39:05.564270   99930 command_runner.go:130] > conmon_env = [
	I1212 22:39:05.564281   99930 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 22:39:05.564289   99930 command_runner.go:130] > ]
	I1212 22:39:05.564298   99930 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:39:05.564311   99930 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:39:05.564323   99930 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:39:05.564331   99930 command_runner.go:130] > # default_env = [
	I1212 22:39:05.564335   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564343   99930 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:39:05.564347   99930 command_runner.go:130] > # selinux = false
	I1212 22:39:05.564361   99930 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:39:05.564375   99930 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:39:05.564385   99930 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:39:05.564395   99930 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:39:05.564405   99930 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:39:05.564417   99930 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:39:05.564453   99930 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:39:05.564466   99930 command_runner.go:130] > # which might increase security.
	I1212 22:39:05.564474   99930 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 22:39:05.564485   99930 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:39:05.564499   99930 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:39:05.564512   99930 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:39:05.564526   99930 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:39:05.564538   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:39:05.564546   99930 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:39:05.564553   99930 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:39:05.564564   99930 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:39:05.564572   99930 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:39:05.564584   99930 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:39:05.564594   99930 command_runner.go:130] > # irqbalance daemon.
	I1212 22:39:05.564605   99930 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:39:05.564618   99930 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:39:05.564630   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:39:05.564639   99930 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:39:05.564645   99930 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:39:05.564654   99930 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:39:05.564669   99930 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:39:05.564682   99930 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:39:05.564696   99930 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:39:05.564709   99930 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:39:05.564719   99930 command_runner.go:130] > # will be added.
	I1212 22:39:05.564726   99930 command_runner.go:130] > # default_capabilities = [
	I1212 22:39:05.564732   99930 command_runner.go:130] > # 	"CHOWN",
	I1212 22:39:05.564737   99930 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:39:05.564743   99930 command_runner.go:130] > # 	"FSETID",
	I1212 22:39:05.564752   99930 command_runner.go:130] > # 	"FOWNER",
	I1212 22:39:05.564759   99930 command_runner.go:130] > # 	"SETGID",
	I1212 22:39:05.564766   99930 command_runner.go:130] > # 	"SETUID",
	I1212 22:39:05.564776   99930 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:39:05.564783   99930 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:39:05.564793   99930 command_runner.go:130] > # 	"KILL",
	I1212 22:39:05.564804   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564819   99930 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:39:05.564833   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:39:05.564843   99930 command_runner.go:130] > # default_sysctls = [
	I1212 22:39:05.564851   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564861   99930 command_runner.go:130] > # List of devices on the host that a
	I1212 22:39:05.564875   99930 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:39:05.564885   99930 command_runner.go:130] > # allowed_devices = [
	I1212 22:39:05.564892   99930 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:39:05.564901   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564910   99930 command_runner.go:130] > # List of additional devices. specified as
	I1212 22:39:05.564926   99930 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:39:05.564937   99930 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:39:05.564963   99930 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:39:05.564971   99930 command_runner.go:130] > # additional_devices = [
	I1212 22:39:05.564976   99930 command_runner.go:130] > # ]
	I1212 22:39:05.564987   99930 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:39:05.564994   99930 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:39:05.565004   99930 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:39:05.565010   99930 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:39:05.565024   99930 command_runner.go:130] > # ]
	I1212 22:39:05.565038   99930 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:39:05.565051   99930 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:39:05.565061   99930 command_runner.go:130] > # Defaults to false.
	I1212 22:39:05.565075   99930 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:39:05.565085   99930 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:39:05.565098   99930 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:39:05.565108   99930 command_runner.go:130] > # hooks_dir = [
	I1212 22:39:05.565117   99930 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:39:05.565125   99930 command_runner.go:130] > # ]
	I1212 22:39:05.565136   99930 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:39:05.565149   99930 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:39:05.565157   99930 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:39:05.565160   99930 command_runner.go:130] > #
	I1212 22:39:05.565167   99930 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:39:05.565175   99930 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:39:05.565181   99930 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:39:05.565187   99930 command_runner.go:130] > #
	I1212 22:39:05.565193   99930 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:39:05.565202   99930 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:39:05.565209   99930 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:39:05.565217   99930 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:39:05.565220   99930 command_runner.go:130] > #
	I1212 22:39:05.565225   99930 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:39:05.565233   99930 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:39:05.565239   99930 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:39:05.565245   99930 command_runner.go:130] > pids_limit = 1024
	I1212 22:39:05.565252   99930 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 22:39:05.565258   99930 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:39:05.565267   99930 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:39:05.565275   99930 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:39:05.565282   99930 command_runner.go:130] > # log_size_max = -1
	I1212 22:39:05.565305   99930 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1212 22:39:05.565314   99930 command_runner.go:130] > # log_to_journald = false
	I1212 22:39:05.565342   99930 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:39:05.565359   99930 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:39:05.565369   99930 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:39:05.565375   99930 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:39:05.565385   99930 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:39:05.565389   99930 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:39:05.565403   99930 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:39:05.565409   99930 command_runner.go:130] > # read_only = false
	I1212 22:39:05.565416   99930 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:39:05.565424   99930 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:39:05.565429   99930 command_runner.go:130] > # live configuration reload.
	I1212 22:39:05.565435   99930 command_runner.go:130] > # log_level = "info"
	I1212 22:39:05.565441   99930 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:39:05.565449   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:39:05.565453   99930 command_runner.go:130] > # log_filter = ""
	I1212 22:39:05.565461   99930 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:39:05.565468   99930 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:39:05.565472   99930 command_runner.go:130] > # separated by comma.
	I1212 22:39:05.565476   99930 command_runner.go:130] > # uid_mappings = ""
	I1212 22:39:05.565482   99930 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:39:05.565490   99930 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:39:05.565495   99930 command_runner.go:130] > # separated by comma.
	I1212 22:39:05.565501   99930 command_runner.go:130] > # gid_mappings = ""
	I1212 22:39:05.565507   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:39:05.565514   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:39:05.565522   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:39:05.565526   99930 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:39:05.565534   99930 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:39:05.565540   99930 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:39:05.565548   99930 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:39:05.565553   99930 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:39:05.565562   99930 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:39:05.565568   99930 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:39:05.565579   99930 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:39:05.565584   99930 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:39:05.565590   99930 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:39:05.565598   99930 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:39:05.565603   99930 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:39:05.565609   99930 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:39:05.565613   99930 command_runner.go:130] > drop_infra_ctr = false
	I1212 22:39:05.565622   99930 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:39:05.565628   99930 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:39:05.565637   99930 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:39:05.565641   99930 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:39:05.565647   99930 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:39:05.565655   99930 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:39:05.565659   99930 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:39:05.565668   99930 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:39:05.565672   99930 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 22:39:05.565681   99930 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:39:05.565687   99930 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:39:05.565695   99930 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:39:05.565699   99930 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:39:05.565706   99930 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:39:05.565714   99930 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 22:39:05.565725   99930 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1212 22:39:05.565730   99930 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:39:05.565740   99930 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:39:05.565745   99930 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:39:05.565750   99930 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:39:05.565754   99930 command_runner.go:130] > # ]
	I1212 22:39:05.565760   99930 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:39:05.565769   99930 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:39:05.565775   99930 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:39:05.565783   99930 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:39:05.565787   99930 command_runner.go:130] > #
	I1212 22:39:05.565793   99930 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:39:05.565802   99930 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:39:05.565809   99930 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:39:05.565814   99930 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:39:05.565820   99930 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:39:05.565827   99930 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:39:05.565831   99930 command_runner.go:130] > # Where:
	I1212 22:39:05.565837   99930 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:39:05.565843   99930 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:39:05.565869   99930 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:39:05.565883   99930 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:39:05.565891   99930 command_runner.go:130] > #   in $PATH.
	I1212 22:39:05.565898   99930 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:39:05.565905   99930 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:39:05.565912   99930 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:39:05.565918   99930 command_runner.go:130] > #   state.
	I1212 22:39:05.565924   99930 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:39:05.565932   99930 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 22:39:05.565938   99930 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:39:05.565948   99930 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:39:05.565961   99930 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:39:05.565975   99930 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:39:05.565986   99930 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:39:05.565996   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:39:05.566004   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:39:05.566012   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:39:05.566018   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:39:05.566028   99930 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:39:05.566035   99930 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:39:05.566043   99930 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:39:05.566050   99930 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:39:05.566057   99930 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:39:05.566063   99930 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:39:05.566070   99930 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 22:39:05.566074   99930 command_runner.go:130] > runtime_type = "oci"
	I1212 22:39:05.566081   99930 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:39:05.566086   99930 command_runner.go:130] > runtime_config_path = ""
	I1212 22:39:05.566093   99930 command_runner.go:130] > monitor_path = ""
	I1212 22:39:05.566102   99930 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:39:05.566109   99930 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:39:05.566122   99930 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:39:05.566132   99930 command_runner.go:130] > # running containers
	I1212 22:39:05.566138   99930 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:39:05.566152   99930 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:39:05.566185   99930 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:39:05.566206   99930 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:39:05.566214   99930 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:39:05.566222   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:39:05.566233   99930 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:39:05.566241   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:39:05.566252   99930 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:39:05.566261   99930 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
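A minimal sketch of enabling one of the commented-out handlers above and exposing it to pods, assuming crun is installed at /usr/bin/crun and that this CRI-O build honors drop-ins under /etc/crio/crio.conf.d (both are assumptions, not taken from this log):

sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
EOF
sudo systemctl restart crio
# Make the handler selectable from pod specs via runtimeClassName: crun
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
EOF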
	I1212 22:39:05.566272   99930 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:39:05.566284   99930 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:39:05.566297   99930 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:39:05.566311   99930 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 22:39:05.566327   99930 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:39:05.566340   99930 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:39:05.566352   99930 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:39:05.566362   99930 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:39:05.566370   99930 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:39:05.566378   99930 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:39:05.566384   99930 command_runner.go:130] > # Example:
	I1212 22:39:05.566388   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:39:05.566394   99930 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:39:05.566399   99930 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:39:05.566405   99930 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:39:05.566409   99930 command_runner.go:130] > # cpuset = 0
	I1212 22:39:05.566414   99930 command_runner.go:130] > # cpushares = "0-1"
	I1212 22:39:05.566418   99930 command_runner.go:130] > # Where:
	I1212 22:39:05.566424   99930 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:39:05.566433   99930 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:39:05.566439   99930 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:39:05.566447   99930 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:39:05.566455   99930 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:39:05.566466   99930 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:39:05.566472   99930 command_runner.go:130] > # 
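Purely as an illustration of the annotation form described above (the example workload itself is commented out in this config, and every name and value below is made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                               # opts the pod in; only the key matters
    io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override for container "app"
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
EOF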
	I1212 22:39:05.566501   99930 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:39:05.566509   99930 command_runner.go:130] > #
	I1212 22:39:05.566518   99930 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:39:05.566532   99930 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:39:05.566546   99930 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:39:05.566559   99930 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:39:05.566571   99930 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:39:05.566577   99930 command_runner.go:130] > [crio.image]
	I1212 22:39:05.566584   99930 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:39:05.566591   99930 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:39:05.566597   99930 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:39:05.566606   99930 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:39:05.566610   99930 command_runner.go:130] > # global_auth_file = ""
	I1212 22:39:05.566618   99930 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:39:05.566623   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:39:05.566630   99930 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:39:05.566637   99930 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:39:05.566645   99930 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:39:05.566650   99930 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:39:05.566657   99930 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:39:05.566662   99930 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:39:05.566670   99930 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 22:39:05.566676   99930 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 22:39:05.566684   99930 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:39:05.566689   99930 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:39:05.566697   99930 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:39:05.566703   99930 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:39:05.566711   99930 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:39:05.566717   99930 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:39:05.566725   99930 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:39:05.566728   99930 command_runner.go:130] > # signature_policy = ""
	I1212 22:39:05.566734   99930 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:39:05.566742   99930 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:39:05.566746   99930 command_runner.go:130] > # changing them here.
	I1212 22:39:05.566760   99930 command_runner.go:130] > # insecure_registries = [
	I1212 22:39:05.566766   99930 command_runner.go:130] > # ]
	I1212 22:39:05.566773   99930 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:39:05.566784   99930 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:39:05.566794   99930 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:39:05.566810   99930 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:39:05.566820   99930 command_runner.go:130] > # big_files_temporary_dir = ""
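Assuming crictl on the guest is pointed at the CRI-O socket (as it is on the minikube ISO), the image settings above can be sanity-checked directly on the node:

sudo crictl pull registry.k8s.io/pause:3.9   # the configured pause_image should be pullable
sudo crictl info | head -n 20                # runtime status and config as CRI-O reports it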
	I1212 22:39:05.566833   99930 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:39:05.566843   99930 command_runner.go:130] > # CNI plugins.
	I1212 22:39:05.566849   99930 command_runner.go:130] > [crio.network]
	I1212 22:39:05.566861   99930 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:39:05.566871   99930 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 22:39:05.566877   99930 command_runner.go:130] > # cni_default_network = ""
	I1212 22:39:05.566888   99930 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:39:05.566897   99930 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:39:05.566904   99930 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:39:05.566913   99930 command_runner.go:130] > # plugin_dirs = [
	I1212 22:39:05.566919   99930 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:39:05.566927   99930 command_runner.go:130] > # ]
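The default directories named above can be listed directly on the node; portmap, which this log stats later, lives in the second one:

ls /etc/cni/net.d/   # CNI network definitions CRI-O will pick up
ls /opt/cni/bin/     # plugin binaries, e.g. portmap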
	I1212 22:39:05.566936   99930 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:39:05.566945   99930 command_runner.go:130] > [crio.metrics]
	I1212 22:39:05.566952   99930 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:39:05.566962   99930 command_runner.go:130] > enable_metrics = true
	I1212 22:39:05.566969   99930 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:39:05.566979   99930 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 22:39:05.566993   99930 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:39:05.567008   99930 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:39:05.567020   99930 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:39:05.567029   99930 command_runner.go:130] > # metrics_collectors = [
	I1212 22:39:05.567034   99930 command_runner.go:130] > # 	"operations",
	I1212 22:39:05.567039   99930 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:39:05.567047   99930 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:39:05.567051   99930 command_runner.go:130] > # 	"operations_errors",
	I1212 22:39:05.567058   99930 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:39:05.567063   99930 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:39:05.567069   99930 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:39:05.567075   99930 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:39:05.567081   99930 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:39:05.567086   99930 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:39:05.567092   99930 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:39:05.567099   99930 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:39:05.567108   99930 command_runner.go:130] > # 	"containers_oom",
	I1212 22:39:05.567117   99930 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:39:05.567126   99930 command_runner.go:130] > # 	"operations_total",
	I1212 22:39:05.567136   99930 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:39:05.567143   99930 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:39:05.567154   99930 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:39:05.567161   99930 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:39:05.567172   99930 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:39:05.567180   99930 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:39:05.567191   99930 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:39:05.567201   99930 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:39:05.567209   99930 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:39:05.567218   99930 command_runner.go:130] > # ]
	I1212 22:39:05.567227   99930 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:39:05.567248   99930 command_runner.go:130] > # metrics_port = 9090
	I1212 22:39:05.567257   99930 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:39:05.567267   99930 command_runner.go:130] > # metrics_socket = ""
	I1212 22:39:05.567284   99930 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:39:05.567294   99930 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:39:05.567300   99930 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:39:05.567307   99930 command_runner.go:130] > # certificate on any modification event.
	I1212 22:39:05.567312   99930 command_runner.go:130] > # metrics_cert = ""
	I1212 22:39:05.567318   99930 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:39:05.567329   99930 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:39:05.567339   99930 command_runner.go:130] > # metrics_key = ""
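With enable_metrics = true and the default port above, the listed collectors are scrapeable from the node; binding on the local address is an assumption here:

curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_' | head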
	I1212 22:39:05.567353   99930 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:39:05.567362   99930 command_runner.go:130] > [crio.tracing]
	I1212 22:39:05.567372   99930 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:39:05.567382   99930 command_runner.go:130] > # enable_tracing = false
	I1212 22:39:05.567393   99930 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 22:39:05.567401   99930 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:39:05.567411   99930 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:39:05.567422   99930 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
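A sketch of turning tracing on via a drop-in, assuming an OTLP/gRPC collector is actually listening on 4317 and that drop-ins under /etc/crio/crio.conf.d are honored:

sudo tee /etc/crio/crio.conf.d/20-tracing.conf <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "0.0.0.0:4317"
EOF
sudo systemctl restart crio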
	I1212 22:39:05.567437   99930 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:39:05.567446   99930 command_runner.go:130] > [crio.stats]
	I1212 22:39:05.567460   99930 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:39:05.567472   99930 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:39:05.567483   99930 command_runner.go:130] > # stats_collection_period = 0
	I1212 22:39:05.567533   99930 command_runner.go:130] ! time="2023-12-12 22:39:05.539919916Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1212 22:39:05.567558   99930 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:39:05.567637   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:39:05.567649   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:39:05.567661   99930 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:39:05.567682   99930 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-054207 NodeName:multinode-054207-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:39:05.567884   99930 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-054207-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
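Once the node has joined, the same settings can be read back from the cluster; the join preflight output further down in this log points at the same ConfigMap:

kubectl -n kube-system get cm kubeadm-config -o yaml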
	
	I1212 22:39:05.567969   99930 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-054207-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
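On the node, the unit above plus the 10-kubeadm.conf drop-in written in the next few steps can be inspected with:

systemctl cat kubelet                               # rendered unit file and drop-ins
sudo journalctl -u kubelet --no-pager | tail -n 20  # recent kubelet log lines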
	I1212 22:39:05.568032   99930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:39:05.576858   99930 command_runner.go:130] > kubeadm
	I1212 22:39:05.576878   99930 command_runner.go:130] > kubectl
	I1212 22:39:05.576884   99930 command_runner.go:130] > kubelet
	I1212 22:39:05.577238   99930 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:39:05.577307   99930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 22:39:05.586380   99930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1212 22:39:05.604099   99930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:39:05.618947   99930 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1212 22:39:05.622605   99930 command_runner.go:130] > 192.168.39.172	control-plane.minikube.internal
	I1212 22:39:05.622872   99930 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:39:05.623181   99930 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:39:05.623261   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:39:05.623307   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:39:05.638080   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I1212 22:39:05.638513   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:39:05.639010   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:39:05.639030   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:39:05.639331   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:39:05.639548   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:39:05.639689   99930 start.go:304] JoinCluster: &{Name:multinode-054207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-054207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.15 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:39:05.639824   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 22:39:05.639842   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:39:05.642366   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:39:05.642783   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:39:05.642812   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:39:05.642949   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:39:05.643102   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:39:05.643258   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:39:05.643374   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:39:05.830406   99930 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e24fwm.2qvcczvop6s6wk1s --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 22:39:05.830494   99930 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 22:39:05.830551   99930 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:39:05.830891   99930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:39:05.830942   99930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:39:05.845487   99930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I1212 22:39:05.845913   99930 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:39:05.846338   99930 main.go:141] libmachine: Using API Version  1
	I1212 22:39:05.846357   99930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:39:05.846690   99930 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:39:05.846890   99930 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:39:05.847095   99930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-054207-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1212 22:39:05.847122   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:39:05.850077   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:39:05.850543   99930 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:39:05.850578   99930 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:39:05.850717   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:39:05.850912   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:39:05.851051   99930 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:39:05.851217   99930 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:39:06.087828   99930 command_runner.go:130] > node/multinode-054207-m03 cordoned
	I1212 22:39:09.121155   99930 command_runner.go:130] > pod "busybox-5bc68d56bd-nv24k" has DeletionTimestamp older than 1 seconds, skipping
	I1212 22:39:09.121189   99930 command_runner.go:130] > node/multinode-054207-m03 drained
	I1212 22:39:09.123393   99930 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1212 22:39:09.123414   99930 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-mth9w, kube-system/kube-proxy-xfhnh
	I1212 22:39:09.123442   99930 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-054207-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.276319236s)
	I1212 22:39:09.123463   99930 node.go:108] successfully drained node "m03"
	I1212 22:39:09.123911   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:39:09.124235   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:39:09.124572   99930 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1212 22:39:09.124668   99930 round_trippers.go:463] DELETE https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:39:09.124682   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:09.124693   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:09.124703   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:09.124712   99930 round_trippers.go:473]     Content-Type: application/json
	I1212 22:39:09.147163   99930 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1212 22:39:09.147196   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:09.147207   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:09 GMT
	I1212 22:39:09.147216   99930 round_trippers.go:580]     Audit-Id: 334e9284-2160-491d-94d5-9a5e83521fd5
	I1212 22:39:09.147224   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:09.147232   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:09.147252   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:09.147261   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:09.147273   99930 round_trippers.go:580]     Content-Length: 171
	I1212 22:39:09.147304   99930 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-054207-m03","kind":"nodes","uid":"b0e92539-35e0-4df7-a26b-9c088375b04e"}}
	I1212 22:39:09.147366   99930 node.go:124] successfully deleted node "m03"
	I1212 22:39:09.147393   99930 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
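Expressed as plain kubectl, the removal minikube just drove through its bundled binaries and the DELETE call above amounts to roughly:

kubectl drain multinode-054207-m03 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
kubectl delete node multinode-054207-m03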
	I1212 22:39:09.147422   99930 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 22:39:09.147448   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e24fwm.2qvcczvop6s6wk1s --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-054207-m03"
	I1212 22:39:09.250667   99930 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:39:09.458914   99930 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 22:39:09.458954   99930 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 22:39:09.548920   99930 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:39:09.548953   99930 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:39:09.548963   99930 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:39:09.724952   99930 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 22:39:10.250649   99930 command_runner.go:130] > This node has joined the cluster:
	I1212 22:39:10.250682   99930 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 22:39:10.250692   99930 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 22:39:10.250701   99930 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 22:39:10.253420   99930 command_runner.go:130] ! W1212 22:39:09.234895    2372 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 22:39:10.253454   99930 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1212 22:39:10.253466   99930 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1212 22:39:10.253482   99930 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1212 22:39:10.253509   99930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e24fwm.2qvcczvop6s6wk1s --discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-054207-m03": (1.106038845s)
	I1212 22:39:10.253531   99930 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 22:39:10.504452   99930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-054207 minikube.k8s.io/updated_at=2023_12_12T22_39_10_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:39:10.606860   99930 command_runner.go:130] > node/multinode-054207-m02 labeled
	I1212 22:39:10.620396   99930 command_runner.go:130] > node/multinode-054207-m03 labeled
	I1212 22:39:10.622055   99930 start.go:306] JoinCluster complete in 4.982359583s
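At this point the rejoin can be verified from the control plane, for example:

kubectl get nodes -o wide
kubectl get nodes -L minikube.k8s.io/primary   # the label applied in the step above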
	I1212 22:39:10.622080   99930 cni.go:84] Creating CNI manager for ""
	I1212 22:39:10.622086   99930 cni.go:136] 3 nodes found, recommending kindnet
	I1212 22:39:10.622146   99930 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:39:10.628833   99930 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:39:10.628862   99930 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1212 22:39:10.628872   99930 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 22:39:10.628882   99930 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:39:10.628892   99930 command_runner.go:130] > Access: 2023-12-12 22:35:00.248811046 +0000
	I1212 22:39:10.628908   99930 command_runner.go:130] > Modify: 2023-12-12 19:27:49.000000000 +0000
	I1212 22:39:10.628916   99930 command_runner.go:130] > Change: 2023-12-12 22:34:58.322811046 +0000
	I1212 22:39:10.628922   99930 command_runner.go:130] >  Birth: -
	I1212 22:39:10.629010   99930 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:39:10.629025   99930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:39:10.650782   99930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:39:11.006201   99930 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:39:11.006235   99930 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:39:11.006244   99930 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 22:39:11.006251   99930 command_runner.go:130] > daemonset.apps/kindnet configured
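A quick way to confirm the kindnet DaemonSet reconfigured above has rolled out to all three nodes:

kubectl -n kube-system rollout status daemonset/kindnet
kubectl -n kube-system get ds kindnet -o wide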
	I1212 22:39:11.006865   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:39:11.007108   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:39:11.007462   99930 round_trippers.go:463] GET https://192.168.39.172:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:39:11.007477   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.007488   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.007497   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.010320   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.010339   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.010346   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.010355   99930 round_trippers.go:580]     Content-Length: 291
	I1212 22:39:11.010360   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.010365   99930 round_trippers.go:580]     Audit-Id: 1bac3e62-c586-41c4-9fae-2c0bd49428bf
	I1212 22:39:11.010372   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.010378   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.010387   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.010416   99930 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6f2af7e-14ec-48d1-9818-c77045ad4244","resourceVersion":"890","creationTimestamp":"2023-12-12T22:25:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 22:39:11.010522   99930 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-054207" context rescaled to 1 replicas
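The rescale minikube performs through the scale subresource is roughly equivalent to:

kubectl -n kube-system scale deployment coredns --replicas=1
kubectl -n kube-system get deploy coredns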
	I1212 22:39:11.010553   99930 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1212 22:39:11.012532   99930 out.go:177] * Verifying Kubernetes components...
	I1212 22:39:11.013835   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:39:11.028834   99930 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:39:11.029065   99930 kapi.go:59] client config for multinode-054207: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/multinode-054207/client.key", CAFile:"/home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:39:11.029361   99930 node_ready.go:35] waiting up to 6m0s for node "multinode-054207-m03" to be "Ready" ...
	I1212 22:39:11.029449   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:39:11.029459   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.029471   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.029484   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.032567   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:11.032585   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.032592   99930 round_trippers.go:580]     Audit-Id: a79a372d-8032-4bfd-af67-5d8e9102e5d3
	I1212 22:39:11.032597   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.032602   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.032607   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.032616   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.032621   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.033394   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m03","uid":"bb79169a-eb73-48b7-a8e6-bf071b93e227","resourceVersion":"1230","creationTimestamp":"2023-12-12T22:39:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_39_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:39:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I1212 22:39:11.033651   99930 node_ready.go:49] node "multinode-054207-m03" has status "Ready":"True"
	I1212 22:39:11.033664   99930 node_ready.go:38] duration metric: took 4.282957ms waiting for node "multinode-054207-m03" to be "Ready" ...
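The readiness poll that follows is, in kubectl terms, roughly:

kubectl wait --for=condition=Ready node/multinode-054207-m03 --timeout=360s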
	I1212 22:39:11.033673   99930 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:39:11.033724   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods
	I1212 22:39:11.033732   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.033739   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.033745   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.037570   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:11.037586   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.037594   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.037600   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.037606   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.037615   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.037633   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.037649   99930 round_trippers.go:580]     Audit-Id: 7fbf37ae-c066-4a78-a991-a84a900a1413
	I1212 22:39:11.038655   99930 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1234"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82071 chars]
	I1212 22:39:11.042243   99930 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.042341   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rj4p4
	I1212 22:39:11.042353   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.042365   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.042376   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.044999   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.045023   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.045033   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.045042   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.045050   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.045058   99930 round_trippers.go:580]     Audit-Id: a178f911-864e-452d-9d89-360e18bac988
	I1212 22:39:11.045067   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.045078   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.045255   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rj4p4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8bd5cacb-68c8-41e5-a91e-07e6a9739897","resourceVersion":"871","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9efb03eb-fdff-4427-aa6f-74d01ca5220b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9efb03eb-fdff-4427-aa6f-74d01ca5220b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1212 22:39:11.045765   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.045782   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.045793   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.045807   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.048107   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.048130   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.048140   99930 round_trippers.go:580]     Audit-Id: bc3bcc3c-2ddf-4058-beff-6583a8b3949e
	I1212 22:39:11.048147   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.048157   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.048168   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.048176   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.048184   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.048444   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:11.048836   99930 pod_ready.go:92] pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.048854   99930 pod_ready.go:81] duration metric: took 6.58163ms waiting for pod "coredns-5dd5756b68-rj4p4" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.048866   99930 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.048931   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-054207
	I1212 22:39:11.048942   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.048952   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.048960   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.050960   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:39:11.050979   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.050987   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.050995   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.051002   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.051022   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.051030   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.051038   99930 round_trippers.go:580]     Audit-Id: cbfbe30c-c8bf-4979-b0f0-f904132c2621
	I1212 22:39:11.051315   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-054207","namespace":"kube-system","uid":"2c328cec-c2e2-49d1-85af-66899f444c90","resourceVersion":"891","creationTimestamp":"2023-12-12T22:25:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.172:2379","kubernetes.io/config.hash":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.mirror":"a8ea46a4c32716d2f532486a0df40c80","kubernetes.io/config.seen":"2023-12-12T22:25:01.374243786Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1212 22:39:11.051751   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.051767   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.051778   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.051787   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.053647   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:39:11.053666   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.053675   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.053684   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.053693   99930 round_trippers.go:580]     Audit-Id: 07fd4f20-ca8c-40c8-993f-18d6d7301073
	I1212 22:39:11.053700   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.053715   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.053723   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.053883   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:11.054246   99930 pod_ready.go:92] pod "etcd-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.054263   99930 pod_ready.go:81] duration metric: took 5.385109ms waiting for pod "etcd-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.054285   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.054348   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-054207
	I1212 22:39:11.054359   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.054369   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.054379   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.056651   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.056671   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.056681   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.056688   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.056695   99930 round_trippers.go:580]     Audit-Id: 414fa100-7c1d-48e0-ab4f-da6cb8b40cd4
	I1212 22:39:11.056717   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.056725   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.056733   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.056935   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-054207","namespace":"kube-system","uid":"70bc63a6-e544-401c-90ae-7473ce8343da","resourceVersion":"882","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.172:8443","kubernetes.io/config.hash":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.mirror":"767f78d84df6cc4b5db4cd1537aebe27","kubernetes.io/config.seen":"2023-12-12T22:25:10.498243509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1212 22:39:11.057400   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.057416   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.057427   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.057439   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.059513   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.059532   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.059541   99930 round_trippers.go:580]     Audit-Id: 2d249bdd-2d01-4bca-b3bd-c44c634052bf
	I1212 22:39:11.059549   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.059556   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.059563   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.059571   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.059579   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.059723   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:11.060109   99930 pod_ready.go:92] pod "kube-apiserver-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.060125   99930 pod_ready.go:81] duration metric: took 5.829755ms waiting for pod "kube-apiserver-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.060139   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.060192   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-054207
	I1212 22:39:11.060201   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.060210   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.060220   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.062624   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.062643   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.062652   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.062661   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.062671   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.062685   99930 round_trippers.go:580]     Audit-Id: 975b782b-0c70-4f15-b22a-f600c9a8514c
	I1212 22:39:11.062693   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.062704   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.062882   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-054207","namespace":"kube-system","uid":"9040c58b-7f77-4355-880f-991c010720f7","resourceVersion":"893","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.mirror":"9cec9887dcff7004aa4082a4b73fb6ba","kubernetes.io/config.seen":"2023-12-12T22:25:10.498244800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1212 22:39:11.063232   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.063258   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.063268   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.063279   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.064997   99930 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:39:11.065016   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.065026   99930 round_trippers.go:580]     Audit-Id: b1edc40b-0182-4d7e-9541-56f77ecef3c4
	I1212 22:39:11.065034   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.065042   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.065051   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.065061   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.065072   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.065316   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:11.065571   99930 pod_ready.go:92] pod "kube-controller-manager-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.065586   99930 pod_ready.go:81] duration metric: took 5.439031ms waiting for pod "kube-controller-manager-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.065600   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.230043   99930 request.go:629] Waited for 164.352613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:39:11.230126   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jtfmt
	I1212 22:39:11.230138   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.230150   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.230163   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.233155   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.233180   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.233191   99930 round_trippers.go:580]     Audit-Id: 82b5d510-b9f7-4f49-80c7-92e59f4ffbdc
	I1212 22:39:11.233198   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.233206   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.233214   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.233230   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.233241   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.233590   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jtfmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d38d8816-bb76-4b9d-aa24-33744ec196fa","resourceVersion":"1051","creationTimestamp":"2023-12-12T22:26:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1212 22:39:11.430512   99930 request.go:629] Waited for 196.400959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:39:11.430581   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m02
	I1212 22:39:11.430586   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.430594   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.430601   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.433518   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:11.433539   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.433546   99930 round_trippers.go:580]     Audit-Id: e506defd-c93e-4806-93f6-e3b3de270096
	I1212 22:39:11.433552   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.433560   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.433574   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.433582   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.433590   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.433772   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m02","uid":"255afb45-7963-4a1d-a2ad-72f01ff3d57e","resourceVersion":"1229","creationTimestamp":"2023-12-12T22:37:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_39_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1212 22:39:11.434143   99930 pod_ready.go:92] pod "kube-proxy-jtfmt" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.434162   99930 pod_ready.go:81] duration metric: took 368.553902ms waiting for pod "kube-proxy-jtfmt" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.434177   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.630445   99930 request.go:629] Waited for 196.192273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:39:11.630524   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rnx8m
	I1212 22:39:11.630530   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.630538   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.630547   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.633712   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:11.633743   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.633755   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.633764   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.633772   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.633780   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.633794   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.633809   99930 round_trippers.go:580]     Audit-Id: d2af6b0d-001e-4ed0-973b-42f403337cb0
	I1212 22:39:11.634054   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnx8m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e8875d71-d50e-44f1-92c1-db1858b4b3bb","resourceVersion":"833","creationTimestamp":"2023-12-12T22:25:22Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:39:11.829829   99930 request.go:629] Waited for 195.32695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.829895   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:11.829901   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:11.829921   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:11.829936   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:11.833046   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:11.833077   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:11.833088   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:11 GMT
	I1212 22:39:11.833097   99930 round_trippers.go:580]     Audit-Id: 3283b969-7bbb-4812-892d-eb9296a6c8e8
	I1212 22:39:11.833105   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:11.833114   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:11.833121   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:11.833129   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:11.833441   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:11.833798   99930 pod_ready.go:92] pod "kube-proxy-rnx8m" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:11.833816   99930 pod_ready.go:81] duration metric: took 399.632689ms waiting for pod "kube-proxy-rnx8m" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:11.833828   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:12.030320   99930 request.go:629] Waited for 196.40744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:39:12.030422   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xfhnh
	I1212 22:39:12.030431   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:12.030443   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:12.030454   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:12.033751   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:12.033775   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:12.033786   99930 round_trippers.go:580]     Audit-Id: 110ba974-f4a4-4944-b424-b97606bd36b1
	I1212 22:39:12.033793   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:12.033809   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:12.033818   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:12.033826   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:12.033838   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:12 GMT
	I1212 22:39:12.034027   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xfhnh","generateName":"kube-proxy-","namespace":"kube-system","uid":"2ca01f00-0c60-4a26-8baf-0718911a7f01","resourceVersion":"1243","creationTimestamp":"2023-12-12T22:26:53Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"183e7f1d-cfc5-4603-b469-bda53b362129","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:26:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"183e7f1d-cfc5-4603-b469-bda53b362129\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1212 22:39:12.229991   99930 request.go:629] Waited for 195.419971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:39:12.230084   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207-m03
	I1212 22:39:12.230098   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:12.230109   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:12.230119   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:12.232697   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:12.232716   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:12.232724   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:12.232729   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:12.232734   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:12.232739   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:12 GMT
	I1212 22:39:12.232744   99930 round_trippers.go:580]     Audit-Id: 49d6dbbe-1290-43b4-b5e5-5132458c5361
	I1212 22:39:12.232749   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:12.232908   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207-m03","uid":"bb79169a-eb73-48b7-a8e6-bf071b93e227","resourceVersion":"1230","creationTimestamp":"2023-12-12T22:39:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_39_10_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:39:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I1212 22:39:12.233196   99930 pod_ready.go:92] pod "kube-proxy-xfhnh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:12.233210   99930 pod_ready.go:81] duration metric: took 399.376916ms waiting for pod "kube-proxy-xfhnh" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:12.233220   99930 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:12.429536   99930 request.go:629] Waited for 196.246626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:39:12.429615   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-054207
	I1212 22:39:12.429622   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:12.429632   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:12.429642   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:12.432592   99930 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:39:12.432615   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:12.432623   99930 round_trippers.go:580]     Audit-Id: 4ab734ea-126f-44c0-baac-fa901a882f47
	I1212 22:39:12.432628   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:12.432633   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:12.432638   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:12.432644   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:12.432649   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:12 GMT
	I1212 22:39:12.432894   99930 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-054207","namespace":"kube-system","uid":"79f6cbd9-988a-4dc2-a910-15abd7598b9c","resourceVersion":"884","creationTimestamp":"2023-12-12T22:25:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.mirror":"0decf830d069a663b6d187c356fa2e3f","kubernetes.io/config.seen":"2023-12-12T22:25:01.374250221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1212 22:39:12.629679   99930 request.go:629] Waited for 196.283421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:12.629753   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes/multinode-054207
	I1212 22:39:12.629758   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:12.629767   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:12.629774   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:12.632957   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:12.632984   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:12.632992   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:12.632997   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:12.633003   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:12.633008   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:12 GMT
	I1212 22:39:12.633013   99930 round_trippers.go:580]     Audit-Id: 53778e3e-b55f-4ffb-9505-586220d6dc58
	I1212 22:39:12.633017   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:12.633839   99930 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:25:07Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1212 22:39:12.634176   99930 pod_ready.go:92] pod "kube-scheduler-multinode-054207" in "kube-system" namespace has status "Ready":"True"
	I1212 22:39:12.634193   99930 pod_ready.go:81] duration metric: took 400.967147ms waiting for pod "kube-scheduler-multinode-054207" in "kube-system" namespace to be "Ready" ...
	I1212 22:39:12.634204   99930 pod_ready.go:38] duration metric: took 1.600521682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:39:12.634223   99930 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:39:12.634289   99930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:39:12.648837   99930 system_svc.go:56] duration metric: took 14.606454ms WaitForService to wait for kubelet.
	I1212 22:39:12.648869   99930 kubeadm.go:581] duration metric: took 1.638288453s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:39:12.648890   99930 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:39:12.830317   99930 request.go:629] Waited for 181.334372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.172:8443/api/v1/nodes
	I1212 22:39:12.830383   99930 round_trippers.go:463] GET https://192.168.39.172:8443/api/v1/nodes
	I1212 22:39:12.830390   99930 round_trippers.go:469] Request Headers:
	I1212 22:39:12.830400   99930 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:39:12.830408   99930 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:39:12.833455   99930 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:39:12.833484   99930 round_trippers.go:577] Response Headers:
	I1212 22:39:12.833495   99930 round_trippers.go:580]     Content-Type: application/json
	I1212 22:39:12.833503   99930 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38d04af6-15c5-4554-8892-05ca22191f0b
	I1212 22:39:12.833511   99930 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e3f09de6-dc39-417c-9260-f80b4b84166f
	I1212 22:39:12.833517   99930 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:39:12 GMT
	I1212 22:39:12.833525   99930 round_trippers.go:580]     Audit-Id: 3cb20b73-6b07-4a8b-b255-4d04aa4be2d2
	I1212 22:39:12.833533   99930 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:39:12.833936   99930 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1249"},"items":[{"metadata":{"name":"multinode-054207","uid":"2e3be68f-f33e-487a-b58a-5a8ee04c2ba9","resourceVersion":"905","creationTimestamp":"2023-12-12T22:25:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-054207","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-054207","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_25_11_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16466 chars]
	I1212 22:39:12.834542   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:39:12.834562   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:39:12.834573   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:39:12.834577   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:39:12.834581   99930 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 22:39:12.834585   99930 node_conditions.go:123] node cpu capacity is 2
	I1212 22:39:12.834601   99930 node_conditions.go:105] duration metric: took 185.707395ms to run NodePressure ...
	I1212 22:39:12.834612   99930 start.go:228] waiting for startup goroutines ...
	I1212 22:39:12.834632   99930 start.go:242] writing updated cluster config ...
	I1212 22:39:12.834908   99930 ssh_runner.go:195] Run: rm -f paused
	I1212 22:39:12.885847   99930 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:39:12.889296   99930 out.go:177] * Done! kubectl is now configured to use "multinode-054207" cluster and "default" namespace by default
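	(Editor's note: the trace above ends with minikube's readiness wait: for each system-critical pod — coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler — it GETs the pod and its node from the API server until the pod's Ready condition is True, then checks the kubelet service and node conditions. The following is a minimal, hypothetical client-go sketch of that polling pattern, for illustration only; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, pod name, and timeouts are assumptions.)

	// readiness_sketch.go - illustrative only; assumes a reachable cluster and a
	// kubeconfig on disk. Not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig path; minikube would use its own profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The log above waits up to 6m0s per pod; mirror that bound here.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// Example pod name taken from the log above; any kube-system pod works.
		name := "etcd-multinode-054207"
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Printf("timed out waiting for %q\n", name)
				return
			case <-time.After(2 * time.Second):
				// Re-poll; client-go may additionally delay requests via its
				// client-side rate limiter, which is what the "Waited for ...
				// due to client-side throttling" lines above report.
			}
		}
	}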
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 22:34:59 UTC, ends at Tue 2023-12-12 22:39:14 UTC. --
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.024171302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702420754024155724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=349566cf-f55d-41d2-b650-624d61e046de name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.024707007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b6f71912-bc5b-4a52-a6ef-a3ea3003d2b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.024762526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b6f71912-bc5b-4a52-a6ef-a3ea3003d2b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.025125004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a60ee18414ee6591ad0bfc3da87906a7c3bc6770a43e43b57d47e245424d3b70,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702420563710563401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62295a570e9ed6539344fbcb27f3462794c2a51732fc6adec0930313b460d251,PodSandboxId:52d9b0045cda7106c0056e640b50212317aefb938cbaf4f0fb3c676479199064,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702420541170679634,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b4ba5bb392a28abc930b5b491da80d7ea57e0c75bd77d0e9e51b2bdd06538e,PodSandboxId:010ac0ac47bc09a1d9165a3611df5a7eb2d4fe97abc8024cc9bd5bceba5f6885,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702420539989267200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108ec3098d4aa03aa87cbeb017d269b80ee666fc79562a5d022ed327ee613358,PodSandboxId:37c90bed7fb862fbbacc76f11c4c104d71de6fd7f8449297453f81ef68116ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702420534992730780,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfa73f23708fe10cae720ef21d56db81f2a0b083913e2ab28b17287ffbb4a3e,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702420532669049824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da66b6e8030baf361230dc860df63d81adbcd8a3c38ddf6079d2bdf311facdec,PodSandboxId:7172b846e5f0e7c63b2f5c063cf1bd8b7edd9fa651a5570a7d87488a4a7b57fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702420532429082894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858b4
b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ddf5ac835275a7ae661f0332a6bf7d5958698dc3cb7c5297da6337a6dc6513,PodSandboxId:a7f19f9b9242ab48750f1770794ccb059c0abc986ac5c62c3a70aa44f7d3e4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702420526862137062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af68aad17f9b3c06deeb81349d9bc85ec9df465c9d2de2b99ec3b5859534098a,PodSandboxId:1b614032e49ea8fc21910eeb148411f33076bf9b6b66c3f58579dee907b6a8c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702420526789062957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{io.kubernetes.container.has
h: 984c1859,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c308c3e42f21317fab75409c93f6783e3a3b3e1bbece7f4bbbe4954b0c5fc986,PodSandboxId:cddb83e76d41d1b87090ba014f213f1712acdb0a8c7c30a331c847fce0fc167a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702420526745225666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42c4a3f69970f0ca897fd08aca99f4ba58ea821cf9935797a326360a8dfeba,PodSandboxId:63023cde6c16bb13cec2083b15dbae39a7ea750fbf8bf056a9b65f980327539b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702420526530355451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fa2c3a7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b6f71912-bc5b-4a52-a6ef-a3ea3003d2b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.066683007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eb85382d-1f43-411b-ae98-8c5be5dc3ed4 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.066739951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eb85382d-1f43-411b-ae98-8c5be5dc3ed4 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.067932846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=85fe6a69-2e03-410c-8d00-af45bf671e52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.068289190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702420754068276409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=85fe6a69-2e03-410c-8d00-af45bf671e52 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.068893291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fee3b942-bdff-4e05-82ed-4c8ba9f82c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.068999816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fee3b942-bdff-4e05-82ed-4c8ba9f82c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.069216206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a60ee18414ee6591ad0bfc3da87906a7c3bc6770a43e43b57d47e245424d3b70,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702420563710563401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62295a570e9ed6539344fbcb27f3462794c2a51732fc6adec0930313b460d251,PodSandboxId:52d9b0045cda7106c0056e640b50212317aefb938cbaf4f0fb3c676479199064,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702420541170679634,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b4ba5bb392a28abc930b5b491da80d7ea57e0c75bd77d0e9e51b2bdd06538e,PodSandboxId:010ac0ac47bc09a1d9165a3611df5a7eb2d4fe97abc8024cc9bd5bceba5f6885,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702420539989267200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108ec3098d4aa03aa87cbeb017d269b80ee666fc79562a5d022ed327ee613358,PodSandboxId:37c90bed7fb862fbbacc76f11c4c104d71de6fd7f8449297453f81ef68116ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702420534992730780,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfa73f23708fe10cae720ef21d56db81f2a0b083913e2ab28b17287ffbb4a3e,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702420532669049824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da66b6e8030baf361230dc860df63d81adbcd8a3c38ddf6079d2bdf311facdec,PodSandboxId:7172b846e5f0e7c63b2f5c063cf1bd8b7edd9fa651a5570a7d87488a4a7b57fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702420532429082894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858b4
b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ddf5ac835275a7ae661f0332a6bf7d5958698dc3cb7c5297da6337a6dc6513,PodSandboxId:a7f19f9b9242ab48750f1770794ccb059c0abc986ac5c62c3a70aa44f7d3e4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702420526862137062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af68aad17f9b3c06deeb81349d9bc85ec9df465c9d2de2b99ec3b5859534098a,PodSandboxId:1b614032e49ea8fc21910eeb148411f33076bf9b6b66c3f58579dee907b6a8c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702420526789062957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{io.kubernetes.container.has
h: 984c1859,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c308c3e42f21317fab75409c93f6783e3a3b3e1bbece7f4bbbe4954b0c5fc986,PodSandboxId:cddb83e76d41d1b87090ba014f213f1712acdb0a8c7c30a331c847fce0fc167a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702420526745225666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42c4a3f69970f0ca897fd08aca99f4ba58ea821cf9935797a326360a8dfeba,PodSandboxId:63023cde6c16bb13cec2083b15dbae39a7ea750fbf8bf056a9b65f980327539b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702420526530355451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fa2c3a7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fee3b942-bdff-4e05-82ed-4c8ba9f82c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.111717417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f7ace16-0642-48fe-8d13-c2d72243c07f name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.111774646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f7ace16-0642-48fe-8d13-c2d72243c07f name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.113451034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6f88ac5f-4f56-468a-be24-5e87ee7f8249 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.113803937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702420754113793060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6f88ac5f-4f56-468a-be24-5e87ee7f8249 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.114538233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=890209ce-1159-4e38-891f-1f2930ca9cb6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.114615004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=890209ce-1159-4e38-891f-1f2930ca9cb6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.114875227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a60ee18414ee6591ad0bfc3da87906a7c3bc6770a43e43b57d47e245424d3b70,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702420563710563401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62295a570e9ed6539344fbcb27f3462794c2a51732fc6adec0930313b460d251,PodSandboxId:52d9b0045cda7106c0056e640b50212317aefb938cbaf4f0fb3c676479199064,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702420541170679634,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b4ba5bb392a28abc930b5b491da80d7ea57e0c75bd77d0e9e51b2bdd06538e,PodSandboxId:010ac0ac47bc09a1d9165a3611df5a7eb2d4fe97abc8024cc9bd5bceba5f6885,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702420539989267200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108ec3098d4aa03aa87cbeb017d269b80ee666fc79562a5d022ed327ee613358,PodSandboxId:37c90bed7fb862fbbacc76f11c4c104d71de6fd7f8449297453f81ef68116ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702420534992730780,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfa73f23708fe10cae720ef21d56db81f2a0b083913e2ab28b17287ffbb4a3e,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702420532669049824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da66b6e8030baf361230dc860df63d81adbcd8a3c38ddf6079d2bdf311facdec,PodSandboxId:7172b846e5f0e7c63b2f5c063cf1bd8b7edd9fa651a5570a7d87488a4a7b57fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702420532429082894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858b4
b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ddf5ac835275a7ae661f0332a6bf7d5958698dc3cb7c5297da6337a6dc6513,PodSandboxId:a7f19f9b9242ab48750f1770794ccb059c0abc986ac5c62c3a70aa44f7d3e4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702420526862137062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af68aad17f9b3c06deeb81349d9bc85ec9df465c9d2de2b99ec3b5859534098a,PodSandboxId:1b614032e49ea8fc21910eeb148411f33076bf9b6b66c3f58579dee907b6a8c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702420526789062957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{io.kubernetes.container.has
h: 984c1859,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c308c3e42f21317fab75409c93f6783e3a3b3e1bbece7f4bbbe4954b0c5fc986,PodSandboxId:cddb83e76d41d1b87090ba014f213f1712acdb0a8c7c30a331c847fce0fc167a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702420526745225666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42c4a3f69970f0ca897fd08aca99f4ba58ea821cf9935797a326360a8dfeba,PodSandboxId:63023cde6c16bb13cec2083b15dbae39a7ea750fbf8bf056a9b65f980327539b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702420526530355451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fa2c3a7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=890209ce-1159-4e38-891f-1f2930ca9cb6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.159589115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a405947f-420c-4bdf-b1b3-266f698bdfa1 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.159683906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a405947f-420c-4bdf-b1b3-266f698bdfa1 name=/runtime.v1.RuntimeService/Version
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.162177881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=87211768-5d48-466c-aaf9-588fe9d1d069 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.162660409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702420754162644695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=87211768-5d48-466c-aaf9-588fe9d1d069 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.163734262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6112a88a-5a38-4d03-89e9-ab963167e798 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.163778757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6112a88a-5a38-4d03-89e9-ab963167e798 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 22:39:14 multinode-054207 crio[711]: time="2023-12-12 22:39:14.164111485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a60ee18414ee6591ad0bfc3da87906a7c3bc6770a43e43b57d47e245424d3b70,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702420563710563401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62295a570e9ed6539344fbcb27f3462794c2a51732fc6adec0930313b460d251,PodSandboxId:52d9b0045cda7106c0056e640b50212317aefb938cbaf4f0fb3c676479199064,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1702420541170679634,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-7fg9p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 220bf84f-c796-488d-8673-554f240fda87,},Annotations:map[string]string{io.kubernetes.container.hash: 69a025c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b4ba5bb392a28abc930b5b491da80d7ea57e0c75bd77d0e9e51b2bdd06538e,PodSandboxId:010ac0ac47bc09a1d9165a3611df5a7eb2d4fe97abc8024cc9bd5bceba5f6885,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702420539989267200,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rj4p4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bd5cacb-68c8-41e5-a91e-07e6a9739897,},Annotations:map[string]string{io.kubernetes.container.hash: b30da384,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108ec3098d4aa03aa87cbeb017d269b80ee666fc79562a5d022ed327ee613358,PodSandboxId:37c90bed7fb862fbbacc76f11c4c104d71de6fd7f8449297453f81ef68116ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1702420534992730780,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nj2sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 947b4acb-082a-436b-b68f-d253f391ee24,},Annotations:map[string]string{io.kubernetes.container.hash: 59150d93,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edfa73f23708fe10cae720ef21d56db81f2a0b083913e2ab28b17287ffbb4a3e,PodSandboxId:7f0f12b6287f31c8905cb2599add672c048c40eb79fe47a61ab683f3b02f1c22,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702420532669049824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 40d577b4-8d36-4f55-946d-92755b1d6998,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a38f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da66b6e8030baf361230dc860df63d81adbcd8a3c38ddf6079d2bdf311facdec,PodSandboxId:7172b846e5f0e7c63b2f5c063cf1bd8b7edd9fa651a5570a7d87488a4a7b57fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702420532429082894,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnx8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8875d71-d50e-44f1-92c1-db1858b4
b3bb,},Annotations:map[string]string{io.kubernetes.container.hash: efaa442f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ddf5ac835275a7ae661f0332a6bf7d5958698dc3cb7c5297da6337a6dc6513,PodSandboxId:a7f19f9b9242ab48750f1770794ccb059c0abc986ac5c62c3a70aa44f7d3e4fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702420526862137062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0decf830d069a663b6d187c356fa2e3f,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af68aad17f9b3c06deeb81349d9bc85ec9df465c9d2de2b99ec3b5859534098a,PodSandboxId:1b614032e49ea8fc21910eeb148411f33076bf9b6b66c3f58579dee907b6a8c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702420526789062957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ea46a4c32716d2f532486a0df40c80,},Annotations:map[string]string{io.kubernetes.container.has
h: 984c1859,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c308c3e42f21317fab75409c93f6783e3a3b3e1bbece7f4bbbe4954b0c5fc986,PodSandboxId:cddb83e76d41d1b87090ba014f213f1712acdb0a8c7c30a331c847fce0fc167a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702420526745225666,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cec9887dcff7004aa4082a4b73fb6ba,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42c4a3f69970f0ca897fd08aca99f4ba58ea821cf9935797a326360a8dfeba,PodSandboxId:63023cde6c16bb13cec2083b15dbae39a7ea750fbf8bf056a9b65f980327539b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702420526530355451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-054207,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767f78d84df6cc4b5db4cd1537aebe27,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fa2c3a7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6112a88a-5a38-4d03-89e9-ab963167e798 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a60ee18414ee6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   7f0f12b6287f3       storage-provisioner
	62295a570e9ed       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   52d9b0045cda7       busybox-5bc68d56bd-7fg9p
	90b4ba5bb392a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   010ac0ac47bc0       coredns-5dd5756b68-rj4p4
	108ec3098d4aa       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   37c90bed7fb86       kindnet-nj2sh
	edfa73f23708f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   7f0f12b6287f3       storage-provisioner
	da66b6e8030ba       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   7172b846e5f0e       kube-proxy-rnx8m
	e0ddf5ac83527       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   a7f19f9b9242a       kube-scheduler-multinode-054207
	af68aad17f9b3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   1b614032e49ea       etcd-multinode-054207
	c308c3e42f213       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   cddb83e76d41d       kube-controller-manager-multinode-054207
	6a42c4a3f6997       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   63023cde6c16b       kube-apiserver-multinode-054207
	
	* 
	* ==> coredns [90b4ba5bb392a28abc930b5b491da80d7ea57e0c75bd77d0e9e51b2bdd06538e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41074 - 5161 "HINFO IN 709976796855519864.1728399270009873393. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009526111s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-054207
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-054207
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-054207
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_25_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:25:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-054207
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:39:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:36:02 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:36:02 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:36:02 +0000   Tue, 12 Dec 2023 22:25:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:36:02 +0000   Tue, 12 Dec 2023 22:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-054207
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6950ea5719804682b508b18d9ee9af78
	  System UUID:                6950ea57-1980-4682-b508-b18d9ee9af78
	  Boot ID:                    4c171f7d-5119-4f4f-aed3-044a392df34c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7fg9p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-rj4p4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-054207                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-nj2sh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-054207             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-054207    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-rnx8m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-054207             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-054207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-054207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-054207 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-054207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-054207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-054207 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-054207 event: Registered Node multinode-054207 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-054207 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-054207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-054207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-054207 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-054207 event: Registered Node multinode-054207 in Controller
	
	
	Name:               multinode-054207-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-054207-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-054207
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T22_39_10_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:37:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-054207-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:37:28 +0000   Tue, 12 Dec 2023 22:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:37:28 +0000   Tue, 12 Dec 2023 22:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:37:28 +0000   Tue, 12 Dec 2023 22:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:37:28 +0000   Tue, 12 Dec 2023 22:37:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    multinode-054207-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b9d7f94f20c44c4a4ef541de7398d6d
	  System UUID:                6b9d7f94-f20c-44c4-a4ef-541de7398d6d
	  Boot ID:                    c3f7ddfc-243c-448e-981f-b2e9326e1a38
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-vp85t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-gh2q6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-jtfmt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-054207-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-054207-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-054207-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-054207-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m52s                  kubelet     Node multinode-054207-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m12s (x2 over 3m12s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-054207-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-054207-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-054207-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet     Node multinode-054207-m02 status is now: NodeReady
	
	
	Name:               multinode-054207-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-054207-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-054207
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T22_39_10_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:39:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-054207-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:39:10 +0000   Tue, 12 Dec 2023 22:39:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:39:10 +0000   Tue, 12 Dec 2023 22:39:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:39:10 +0000   Tue, 12 Dec 2023 22:39:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:39:10 +0000   Tue, 12 Dec 2023 22:39:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    multinode-054207-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 95c9f01242d7431cbf091f2fbb8b223b
	  System UUID:                95c9f012-42d7-431c-bf09-1f2fbb8b223b
	  Boot ID:                    a0011635-abfa-446e-9604-3d5e72ba6642
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nv24k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-mth9w               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-xfhnh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node multinode-054207-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                kubelet          Node multinode-054207-m03 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeReady                11m                kubelet          Node multinode-054207-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                kubelet          Node multinode-054207-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  5s (x4 over 11m)   kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x4 over 11m)   kubelet          Node multinode-054207-m03 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet          Node multinode-054207-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     5s (x4 over 11m)   kubelet          Node multinode-054207-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                4s                 kubelet          Node multinode-054207-m03 status is now: NodeReady
	  Normal   RegisteredNode           0s                 node-controller  Node multinode-054207-m03 event: Registered Node multinode-054207-m03 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 22:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068183] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.398114] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.184136] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158771] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec12 22:35] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.094262] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.110712] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.151702] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.110557] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.212135] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.043824] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [af68aad17f9b3c06deeb81349d9bc85ec9df465c9d2de2b99ec3b5859534098a] <==
	* {"level":"info","ts":"2023-12-12T22:35:28.62427Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T22:35:28.624296Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T22:35:28.624495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 switched to configuration voters=(13542811178640421969)"}
	{"level":"info","ts":"2023-12-12T22:35:28.624562Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","added-peer-id":"bbf1bb039b0d3451","added-peer-peer-urls":["https://192.168.39.172:2380"]}
	{"level":"info","ts":"2023-12-12T22:35:28.624672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:35:28.624716Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:35:28.626616Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T22:35:28.626914Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"bbf1bb039b0d3451","initial-advertise-peer-urls":["https://192.168.39.172:2380"],"listen-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T22:35:28.627949Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T22:35:28.628087Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2023-12-12T22:35:28.628115Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2023-12-12T22:35:29.993302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T22:35:29.993422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T22:35:29.993481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2023-12-12T22:35:29.993523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T22:35:29.993549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2023-12-12T22:35:29.993576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T22:35:29.993601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2023-12-12T22:35:29.997497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:35:29.998537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.172:2379"}
	{"level":"info","ts":"2023-12-12T22:35:29.998953Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:35:29.999723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T22:35:29.997442Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:multinode-054207 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T22:35:30.000254Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T22:35:30.000301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:39:14 up 4 min,  0 users,  load average: 0.83, 0.40, 0.17
	Linux multinode-054207 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [108ec3098d4aa03aa87cbeb017d269b80ee666fc79562a5d022ed327ee613358] <==
	* I1212 22:38:26.752578       1 main.go:250] Node multinode-054207-m03 has CIDR [10.244.3.0/24] 
	I1212 22:38:36.759767       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:38:36.759887       1 main.go:227] handling current node
	I1212 22:38:36.759905       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:38:36.759912       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	I1212 22:38:36.760034       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I1212 22:38:36.760070       1 main.go:250] Node multinode-054207-m03 has CIDR [10.244.3.0/24] 
	I1212 22:38:46.770744       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:38:46.770798       1 main.go:227] handling current node
	I1212 22:38:46.770810       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:38:46.770903       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	I1212 22:38:46.771021       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I1212 22:38:46.771058       1 main.go:250] Node multinode-054207-m03 has CIDR [10.244.3.0/24] 
	I1212 22:38:56.775667       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:38:56.775722       1 main.go:227] handling current node
	I1212 22:38:56.775735       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:38:56.775742       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	I1212 22:38:56.775903       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I1212 22:38:56.775938       1 main.go:250] Node multinode-054207-m03 has CIDR [10.244.3.0/24] 
	I1212 22:39:06.779963       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I1212 22:39:06.780011       1 main.go:227] handling current node
	I1212 22:39:06.780022       1 main.go:223] Handling node with IPs: map[192.168.39.15:{}]
	I1212 22:39:06.780028       1 main.go:250] Node multinode-054207-m02 has CIDR [10.244.1.0/24] 
	I1212 22:39:06.780124       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I1212 22:39:06.780164       1 main.go:250] Node multinode-054207-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [6a42c4a3f69970f0ca897fd08aca99f4ba58ea821cf9935797a326360a8dfeba] <==
	* I1212 22:35:31.418634       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 22:35:31.418651       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 22:35:31.418678       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 22:35:31.418794       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 22:35:31.495527       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:35:31.522661       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 22:35:31.522742       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 22:35:31.524615       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 22:35:31.524669       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 22:35:31.524771       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 22:35:31.530504       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 22:35:31.534124       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 22:35:31.534186       1 aggregator.go:166] initial CRD sync complete...
	I1212 22:35:31.534210       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 22:35:31.534231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 22:35:31.534254       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:35:31.557510       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1212 22:35:31.600296       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 22:35:32.339099       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 22:35:34.056754       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 22:35:34.209010       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 22:35:34.222365       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 22:35:34.291664       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:35:34.298959       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 22:36:21.279494       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [c308c3e42f21317fab75409c93f6783e3a3b3e1bbece7f4bbbe4954b0c5fc986] <==
	* I1212 22:37:28.348999       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-trmtr" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-trmtr"
	I1212 22:37:28.367756       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-054207-m02" podCIDRs=["10.244.1.0/24"]
	I1212 22:37:28.391533       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:37:29.240185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="91.447µs"
	I1212 22:37:42.627374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="230.305µs"
	I1212 22:37:43.114066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.599µs"
	I1212 22:37:43.120521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.902µs"
	I1212 22:38:04.564265       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:39:06.119960       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-vp85t"
	I1212 22:39:06.131548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.779306ms"
	I1212 22:39:06.151766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.927396ms"
	I1212 22:39:06.152128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="120.127µs"
	I1212 22:39:06.155949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="162.275µs"
	I1212 22:39:07.379267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.991674ms"
	I1212 22:39:07.379657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="121.873µs"
	I1212 22:39:09.109681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.051µs"
	I1212 22:39:09.142774       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:39:09.239665       1 event.go:307] "Event occurred" object="multinode-054207-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-054207-m03 event: Removing Node multinode-054207-m03 from Controller"
	I1212 22:39:09.928251       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:39:09.929036       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-054207-m03\" does not exist"
	I1212 22:39:09.929991       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-nv24k" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-nv24k"
	I1212 22:39:09.964535       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-054207-m03" podCIDRs=["10.244.2.0/24"]
	I1212 22:39:10.075924       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-054207-m02"
	I1212 22:39:10.836423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="194.76µs"
	I1212 22:39:14.241054       1 event.go:307] "Event occurred" object="multinode-054207-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-054207-m03 event: Registered Node multinode-054207-m03 in Controller"
	
	* 
	* ==> kube-proxy [da66b6e8030baf361230dc860df63d81adbcd8a3c38ddf6079d2bdf311facdec] <==
	* I1212 22:35:32.799166       1 server_others.go:69] "Using iptables proxy"
	I1212 22:35:32.826448       1 node.go:141] Successfully retrieved node IP: 192.168.39.172
	I1212 22:35:32.890072       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 22:35:32.890119       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 22:35:32.892766       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:35:32.892902       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:35:32.893102       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:35:32.893136       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:35:32.894360       1 config.go:188] "Starting service config controller"
	I1212 22:35:32.894403       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:35:32.897044       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:35:32.897090       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:35:32.899443       1 config.go:315] "Starting node config controller"
	I1212 22:35:32.899641       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:35:32.994568       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:35:32.999810       1 shared_informer.go:318] Caches are synced for node config
	I1212 22:35:32.999898       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e0ddf5ac835275a7ae661f0332a6bf7d5958698dc3cb7c5297da6337a6dc6513] <==
	* I1212 22:35:28.990523       1 serving.go:348] Generated self-signed cert in-memory
	W1212 22:35:31.455104       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 22:35:31.455222       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 22:35:31.455244       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 22:35:31.455251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 22:35:31.537267       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 22:35:31.537387       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:35:31.544076       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 22:35:31.544200       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 22:35:31.546214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 22:35:31.546304       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 22:35:31.644992       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 22:34:59 UTC, ends at Tue 2023-12-12 22:39:14 UTC. --
	Dec 12 22:35:33 multinode-054207 kubelet[916]: E1212 22:35:33.188984     916 projected.go:198] Error preparing data for projected volume kube-api-access-8tfmd for pod default/busybox-5bc68d56bd-7fg9p: object "default"/"kube-root-ca.crt" not registered
	Dec 12 22:35:33 multinode-054207 kubelet[916]: E1212 22:35:33.189042     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/220bf84f-c796-488d-8673-554f240fda87-kube-api-access-8tfmd podName:220bf84f-c796-488d-8673-554f240fda87 nodeName:}" failed. No retries permitted until 2023-12-12 22:35:35.189027817 +0000 UTC m=+9.963289456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8tfmd" (UniqueName: "kubernetes.io/projected/220bf84f-c796-488d-8673-554f240fda87-kube-api-access-8tfmd") pod "busybox-5bc68d56bd-7fg9p" (UID: "220bf84f-c796-488d-8673-554f240fda87") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 22:35:33 multinode-054207 kubelet[916]: E1212 22:35:33.521145     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-rj4p4" podUID="8bd5cacb-68c8-41e5-a91e-07e6a9739897"
	Dec 12 22:35:33 multinode-054207 kubelet[916]: E1212 22:35:33.521513     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-7fg9p" podUID="220bf84f-c796-488d-8673-554f240fda87"
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.104058     916 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.104136     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8bd5cacb-68c8-41e5-a91e-07e6a9739897-config-volume podName:8bd5cacb-68c8-41e5-a91e-07e6a9739897 nodeName:}" failed. No retries permitted until 2023-12-12 22:35:39.104121549 +0000 UTC m=+13.878383176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8bd5cacb-68c8-41e5-a91e-07e6a9739897-config-volume") pod "coredns-5dd5756b68-rj4p4" (UID: "8bd5cacb-68c8-41e5-a91e-07e6a9739897") : object "kube-system"/"coredns" not registered
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.204736     916 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.204770     916 projected.go:198] Error preparing data for projected volume kube-api-access-8tfmd for pod default/busybox-5bc68d56bd-7fg9p: object "default"/"kube-root-ca.crt" not registered
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.204967     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/220bf84f-c796-488d-8673-554f240fda87-kube-api-access-8tfmd podName:220bf84f-c796-488d-8673-554f240fda87 nodeName:}" failed. No retries permitted until 2023-12-12 22:35:39.204941246 +0000 UTC m=+13.979202882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8tfmd" (UniqueName: "kubernetes.io/projected/220bf84f-c796-488d-8673-554f240fda87-kube-api-access-8tfmd") pod "busybox-5bc68d56bd-7fg9p" (UID: "220bf84f-c796-488d-8673-554f240fda87") : object "default"/"kube-root-ca.crt" not registered
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.520792     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-7fg9p" podUID="220bf84f-c796-488d-8673-554f240fda87"
	Dec 12 22:35:35 multinode-054207 kubelet[916]: E1212 22:35:35.521237     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-rj4p4" podUID="8bd5cacb-68c8-41e5-a91e-07e6a9739897"
	Dec 12 22:35:36 multinode-054207 kubelet[916]: I1212 22:35:36.794772     916 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 22:36:03 multinode-054207 kubelet[916]: I1212 22:36:03.681792     916 scope.go:117] "RemoveContainer" containerID="edfa73f23708fe10cae720ef21d56db81f2a0b083913e2ab28b17287ffbb4a3e"
	Dec 12 22:36:25 multinode-054207 kubelet[916]: E1212 22:36:25.643163     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:36:25 multinode-054207 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:36:25 multinode-054207 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:36:25 multinode-054207 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:37:25 multinode-054207 kubelet[916]: E1212 22:37:25.644600     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:37:25 multinode-054207 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:37:25 multinode-054207 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:37:25 multinode-054207 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 22:38:25 multinode-054207 kubelet[916]: E1212 22:38:25.646009     916 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 22:38:25 multinode-054207 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 22:38:25 multinode-054207 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 22:38:25 multinode-054207 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-054207 -n multinode-054207
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-054207 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (687.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 stop
E1212 22:39:17.802939   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:40:25.172327   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054207 stop: exit status 82 (2m1.227253308s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-054207"  ...
	* Stopping node "multinode-054207"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-054207 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054207 status: exit status 3 (18.645114302s)

                                                
                                                
-- stdout --
	multinode-054207
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-054207-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 22:41:37.379636  102245 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E1212 22:41:37.379676  102245 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-054207 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-054207 -n multinode-054207
E1212 22:41:39.570093   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-054207 -n multinode-054207: exit status 3 (3.186886801s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 22:41:40.739645  102354 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E1212 22:41:40.739669  102354 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-054207" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.06s)

                                                
                                    
TestPreload (281.85s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-113301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 22:50:25.172054   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:51:39.568882   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-113301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m19.866230369s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-113301 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-113301 image pull gcr.io/k8s-minikube/busybox: (1.188326215s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-113301
E1212 22:52:20.850501   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:54:17.803605   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-113301: exit status 82 (2m1.090348502s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-113301"  ...
	* Stopping node "test-preload-113301"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-113301 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-12-12 22:54:20.877371212 +0000 UTC m=+3100.688428672
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-113301 -n test-preload-113301
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-113301 -n test-preload-113301: exit status 3 (18.587275555s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 22:54:39.459595  105360 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host
	E1212 22:54:39.459617  105360 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-113301" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-113301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-113301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-113301: (1.113343099s)
--- FAIL: TestPreload (281.85s)

                                                
                                    
TestRunningBinaryUpgrade (139.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1151813769.exe start -p running-upgrade-535392 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1212 23:01:39.568758   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1151813769.exe start -p running-upgrade-535392 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.41672932s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-535392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-535392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (3.410205004s)

                                                
                                                
-- stdout --
	* [running-upgrade-535392] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-535392 in cluster running-upgrade-535392
	* Updating the running kvm2 "running-upgrade-535392" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:03:46.726935  113928 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:03:46.727084  113928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:03:46.727093  113928 out.go:309] Setting ErrFile to fd 2...
	I1212 23:03:46.727098  113928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:03:46.727346  113928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:03:46.727941  113928 out.go:303] Setting JSON to false
	I1212 23:03:46.728909  113928 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13581,"bootTime":1702408646,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:03:46.728971  113928 start.go:138] virtualization: kvm guest
	I1212 23:03:46.731514  113928 out.go:177] * [running-upgrade-535392] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:03:46.733519  113928 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:03:46.735007  113928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:03:46.733537  113928 notify.go:220] Checking for updates...
	I1212 23:03:46.743301  113928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:03:46.745405  113928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:03:46.747370  113928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:03:46.749898  113928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:03:46.751884  113928 config.go:182] Loaded profile config "running-upgrade-535392": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:03:46.751904  113928 start_flags.go:694] config upgrade: Driver=kvm2
	I1212 23:03:46.751917  113928 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 23:03:46.752005  113928 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/running-upgrade-535392/config.json ...
	I1212 23:03:46.752740  113928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:03:46.752816  113928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:03:46.768367  113928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I1212 23:03:46.768916  113928 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:03:46.769493  113928 main.go:141] libmachine: Using API Version  1
	I1212 23:03:46.769517  113928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:03:46.769842  113928 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:03:46.770021  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:46.772295  113928 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 23:03:46.773792  113928 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:03:46.774200  113928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:03:46.774245  113928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:03:46.789762  113928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I1212 23:03:46.790243  113928 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:03:46.790785  113928 main.go:141] libmachine: Using API Version  1
	I1212 23:03:46.790811  113928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:03:46.791569  113928 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:03:46.793340  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:46.832859  113928 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:03:46.834416  113928 start.go:298] selected driver: kvm2
	I1212 23:03:46.834431  113928 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-535392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.196 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:03:46.834519  113928 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:03:46.835223  113928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.835377  113928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:03:46.850998  113928 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:03:46.851425  113928 cni.go:84] Creating CNI manager for ""
	I1212 23:03:46.851448  113928 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 23:03:46.851461  113928 start_flags.go:323] config:
	{Name:running-upgrade-535392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.196 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:03:46.851675  113928 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.853402  113928 out.go:177] * Starting control plane node running-upgrade-535392 in cluster running-upgrade-535392
	I1212 23:03:46.854773  113928 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1212 23:03:46.882912  113928 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 23:03:46.883040  113928 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/running-upgrade-535392/config.json ...
	I1212 23:03:46.883230  113928 cache.go:107] acquiring lock: {Name:mk9c35ce9554730e909f93e76069b5a5c2630899 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883256  113928 cache.go:107] acquiring lock: {Name:mk2563cec006420ef10c9c439073e1fffa94c73f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883284  113928 cache.go:107] acquiring lock: {Name:mk086d32497c3e65a5205d99c7c502384a812a30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883332  113928 start.go:365] acquiring machines lock for running-upgrade-535392: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:03:46.883370  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 23:03:46.883367  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1212 23:03:46.883339  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1212 23:03:46.883384  113928 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 102.806µs
	I1212 23:03:46.883384  113928 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 150.374µs
	I1212 23:03:46.883390  113928 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 170.342µs
	I1212 23:03:46.883393  113928 start.go:369] acquired machines lock for "running-upgrade-535392" in 45.26µs
	I1212 23:03:46.883397  113928 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 23:03:46.883396  113928 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1212 23:03:46.883400  113928 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1212 23:03:46.883221  113928 cache.go:107] acquiring lock: {Name:mk7325cf87972093bbcb973759a11578551a5fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883410  113928 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:03:46.883418  113928 fix.go:54] fixHost starting: minikube
	I1212 23:03:46.883409  113928 cache.go:107] acquiring lock: {Name:mk77d5176eba686c2bca847bc4c639c3ea1cfbb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883438  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 23:03:46.883387  113928 cache.go:107] acquiring lock: {Name:mke6d85c058dd7936a7a40cad769bdda28d5f307 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883457  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1212 23:03:46.883464  113928 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 57.468µs
	I1212 23:03:46.883478  113928 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1212 23:03:46.883445  113928 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 241.733µs
	I1212 23:03:46.883362  113928 cache.go:107] acquiring lock: {Name:mke11283f995e2095af76efdf2ed5615968ea41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883554  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1212 23:03:46.883567  113928 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 216.04µs
	I1212 23:03:46.883579  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1212 23:03:46.883611  113928 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 257.804µs
	I1212 23:03:46.883582  113928 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1212 23:03:46.883628  113928 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1212 23:03:46.883615  113928 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 23:03:46.883724  113928 cache.go:107] acquiring lock: {Name:mkcaddb005cc4f087c17f0ea5c1952d152fdbb1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:03:46.883820  113928 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1212 23:03:46.883834  113928 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 121.153µs
	I1212 23:03:46.883847  113928 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1212 23:03:46.883880  113928 cache.go:87] Successfully saved all images to host disk.
	I1212 23:03:46.883847  113928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:03:46.884011  113928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:03:46.899001  113928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1212 23:03:46.899486  113928 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:03:46.899961  113928 main.go:141] libmachine: Using API Version  1
	I1212 23:03:46.899994  113928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:03:46.900310  113928 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:03:46.900485  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:46.900669  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetState
	I1212 23:03:46.902425  113928 fix.go:102] recreateIfNeeded on running-upgrade-535392: state=Running err=<nil>
	W1212 23:03:46.902446  113928 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:03:46.907312  113928 out.go:177] * Updating the running kvm2 "running-upgrade-535392" VM ...
	I1212 23:03:46.908828  113928 machine.go:88] provisioning docker machine ...
	I1212 23:03:46.908854  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:46.909103  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetMachineName
	I1212 23:03:46.909306  113928 buildroot.go:166] provisioning hostname "running-upgrade-535392"
	I1212 23:03:46.909325  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetMachineName
	I1212 23:03:46.909467  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:46.912163  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:46.912655  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:46.912697  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:46.912824  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:46.913026  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:46.913196  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:46.913405  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:46.913572  113928 main.go:141] libmachine: Using SSH client type: native
	I1212 23:03:46.914024  113928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1212 23:03:46.914046  113928 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-535392 && echo "running-upgrade-535392" | sudo tee /etc/hostname
	I1212 23:03:47.035721  113928 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-535392
	
	I1212 23:03:47.035790  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:47.039389  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.039840  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:47.039895  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.040125  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:47.040372  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:47.040613  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:47.040799  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:47.041015  113928 main.go:141] libmachine: Using SSH client type: native
	I1212 23:03:47.041434  113928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1212 23:03:47.041466  113928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-535392' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-535392/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-535392' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:03:47.156010  113928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:03:47.156050  113928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:03:47.156100  113928 buildroot.go:174] setting up certificates
	I1212 23:03:47.156112  113928 provision.go:83] configureAuth start
	I1212 23:03:47.156131  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetMachineName
	I1212 23:03:47.156446  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetIP
	I1212 23:03:47.159498  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.159948  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:47.159988  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.160321  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:47.162549  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.162950  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:47.162995  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.163075  113928 provision.go:138] copyHostCerts
	I1212 23:03:47.163161  113928 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:03:47.163175  113928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:03:47.163251  113928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:03:47.163381  113928 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:03:47.163394  113928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:03:47.163423  113928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:03:47.163521  113928 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:03:47.163534  113928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:03:47.163562  113928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:03:47.163641  113928 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-535392 san=[192.168.50.196 192.168.50.196 localhost 127.0.0.1 minikube running-upgrade-535392]
	I1212 23:03:47.404601  113928 provision.go:172] copyRemoteCerts
	I1212 23:03:47.404675  113928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:03:47.404701  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:47.407210  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.407631  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:47.408005  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.410358  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:47.410834  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:47.411065  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:47.411250  113928 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/running-upgrade-535392/id_rsa Username:docker}
	I1212 23:03:47.500006  113928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:03:47.516659  113928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:03:47.534472  113928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:03:47.557670  113928 provision.go:86] duration metric: configureAuth took 401.537303ms
	I1212 23:03:47.557719  113928 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:03:47.558297  113928 config.go:182] Loaded profile config "running-upgrade-535392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:03:47.558398  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:47.561332  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.561716  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:47.561755  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:47.561939  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:47.562162  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:47.562345  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:47.562476  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:47.562607  113928 main.go:141] libmachine: Using SSH client type: native
	I1212 23:03:47.562923  113928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1212 23:03:47.562949  113928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:03:48.071088  113928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:03:48.071116  113928 machine.go:91] provisioned docker machine in 1.16227032s
	I1212 23:03:48.071129  113928 start.go:300] post-start starting for "running-upgrade-535392" (driver="kvm2")
	I1212 23:03:48.071138  113928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:03:48.071161  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:48.071517  113928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:03:48.071552  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:48.074325  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.074722  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:48.074752  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.074988  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:48.075201  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:48.075396  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:48.075531  113928 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/running-upgrade-535392/id_rsa Username:docker}
	I1212 23:03:48.164262  113928 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:03:48.170199  113928 info.go:137] Remote host: Buildroot 2019.02.7
	I1212 23:03:48.170232  113928 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:03:48.170320  113928 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:03:48.170428  113928 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:03:48.170540  113928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:03:48.180539  113928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:03:48.200840  113928 start.go:303] post-start completed in 129.692444ms
	I1212 23:03:48.200880  113928 fix.go:56] fixHost completed within 1.317459524s
	I1212 23:03:48.200910  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:48.204016  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.204445  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:48.204502  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.204677  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:48.204909  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:48.205063  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:48.205199  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:48.205399  113928 main.go:141] libmachine: Using SSH client type: native
	I1212 23:03:48.205850  113928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.196 22 <nil> <nil>}
	I1212 23:03:48.205866  113928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:03:48.324899  113928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422228.320165349
	
	I1212 23:03:48.324932  113928 fix.go:206] guest clock: 1702422228.320165349
	I1212 23:03:48.324947  113928 fix.go:219] Guest: 2023-12-12 23:03:48.320165349 +0000 UTC Remote: 2023-12-12 23:03:48.200885347 +0000 UTC m=+1.530338903 (delta=119.280002ms)
	I1212 23:03:48.324988  113928 fix.go:190] guest clock delta is within tolerance: 119.280002ms
	I1212 23:03:48.324999  113928 start.go:83] releasing machines lock for "running-upgrade-535392", held for 1.441596499s
	I1212 23:03:48.325029  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:48.325335  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetIP
	I1212 23:03:48.328252  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.328762  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:48.328793  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.329032  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:48.329569  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:48.329753  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .DriverName
	I1212 23:03:48.329829  113928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:03:48.329897  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:48.329910  113928 ssh_runner.go:195] Run: cat /version.json
	I1212 23:03:48.329928  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHHostname
	I1212 23:03:48.332967  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.333127  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.333425  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:48.333457  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.333637  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:48.333754  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:28:0e", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:02:02 +0000 UTC Type:0 Mac:52:54:00:ac:28:0e Iaid: IPaddr:192.168.50.196 Prefix:24 Hostname:running-upgrade-535392 Clientid:01:52:54:00:ac:28:0e}
	I1212 23:03:48.333802  113928 main.go:141] libmachine: (running-upgrade-535392) DBG | domain running-upgrade-535392 has defined IP address 192.168.50.196 and MAC address 52:54:00:ac:28:0e in network minikube-net
	I1212 23:03:48.333843  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:48.333988  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHPort
	I1212 23:03:48.334160  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHKeyPath
	I1212 23:03:48.334202  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:48.334380  113928 main.go:141] libmachine: (running-upgrade-535392) Calling .GetSSHUsername
	I1212 23:03:48.334378  113928 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/running-upgrade-535392/id_rsa Username:docker}
	I1212 23:03:48.334534  113928 sshutil.go:53] new ssh client: &{IP:192.168.50.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/running-upgrade-535392/id_rsa Username:docker}
	W1212 23:03:48.416667  113928 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 23:03:48.416762  113928 ssh_runner.go:195] Run: systemctl --version
	I1212 23:03:48.448984  113928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:03:48.524902  113928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:03:48.531410  113928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:03:48.531495  113928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:03:48.538426  113928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:03:48.538458  113928 start.go:475] detecting cgroup driver to use...
	I1212 23:03:48.538548  113928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:03:48.549687  113928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:03:48.561824  113928 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:03:48.561898  113928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:03:48.572945  113928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:03:48.584979  113928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 23:03:48.596268  113928 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 23:03:48.596352  113928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:03:48.733697  113928 docker.go:219] disabling docker service ...
	I1212 23:03:48.733799  113928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:03:49.757875  113928 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.024044673s)
	I1212 23:03:49.757955  113928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:03:49.771653  113928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:03:49.917550  113928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:03:50.035233  113928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:03:50.044856  113928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:03:50.056695  113928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:03:50.056752  113928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:03:50.067105  113928 out.go:177] 
	W1212 23:03:50.068420  113928 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 23:03:50.068437  113928 out.go:239] * 
	* 
	W1212 23:03:50.069391  113928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:03:50.070614  113928 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-535392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
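Note on the failure above: exit status 90 corresponds to the RUNTIME_ENABLE error in the stderr capture; the upgraded binary tries to rewrite pause_image in /etc/crio/crio.conf.d/02-crio.conf, a drop-in that does not exist on the v1.6.2 (Buildroot 2019.02.7) guest. A minimal manual sketch of the same step with a fallback path follows; the /etc/crio/crio.conf location is an assumption about where the legacy image keeps its CRI-O config, not something this run verified.

	# Sketch only (run on the guest): reuse the sed from the failing step,
	# falling back to the legacy config file if the drop-in is missing.
	if [ -f /etc/crio/crio.conf.d/02-crio.conf ]; then
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	else
	  CONF=/etc/crio/crio.conf   # assumed location on the v1.6.2 image
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	sudo systemctl restart crio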
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-12 23:03:50.088563502 +0000 UTC m=+3669.899620942
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-535392 -n running-upgrade-535392
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-535392 -n running-upgrade-535392: exit status 4 (269.750189ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:03:50.324446  113979 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-535392" does not appear in /home/jenkins/minikube-integration/17761-76611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-535392" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-535392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-535392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-535392: (1.46860951s)
--- FAIL: TestRunningBinaryUpgrade (139.94s)
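The post-mortem exit status 4 follows from the same aborted start: the profile never reached the kubeconfig at /home/jenkins/minikube-integration/17761-76611/kubeconfig, so endpoint extraction fails even though the VM reports Running. A hedged follow-up check, using the fix the warning itself suggests (update-context only helps if the apiserver is actually reachable):

	# Sketch: confirm the context is missing, then retry the suggested fix.
	grep -c running-upgrade-535392 /home/jenkins/minikube-integration/17761-76611/kubeconfig || echo "profile absent from kubeconfig"
	out/minikube-linux-amd64 -p running-upgrade-535392 update-context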

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (306.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2104352341.exe start -p stopped-upgrade-809686 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2104352341.exe start -p stopped-upgrade-809686 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.248028014s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2104352341.exe -p stopped-upgrade-809686 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2104352341.exe -p stopped-upgrade-809686 stop: (1m33.083938329s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-809686 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-809686 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m15.942780346s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-809686] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-809686 in cluster stopped-upgrade-809686
	* Restarting existing kvm2 VM for "stopped-upgrade-809686" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:00:33.445020  111680 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:00:33.445248  111680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:00:33.445262  111680 out.go:309] Setting ErrFile to fd 2...
	I1212 23:00:33.445270  111680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:00:33.445618  111680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:00:33.446440  111680 out.go:303] Setting JSON to false
	I1212 23:00:33.447814  111680 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13387,"bootTime":1702408646,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:00:33.447905  111680 start.go:138] virtualization: kvm guest
	I1212 23:00:33.450891  111680 out.go:177] * [stopped-upgrade-809686] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:00:33.453078  111680 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:00:33.453074  111680 notify.go:220] Checking for updates...
	I1212 23:00:33.455422  111680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:00:33.457275  111680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:00:33.458902  111680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:00:33.460498  111680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:00:33.462204  111680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:00:33.464288  111680 config.go:182] Loaded profile config "stopped-upgrade-809686": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:00:33.464318  111680 start_flags.go:694] config upgrade: Driver=kvm2
	I1212 23:00:33.464341  111680 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 23:00:33.464444  111680 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/stopped-upgrade-809686/config.json ...
	I1212 23:00:33.465500  111680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:00:33.465627  111680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:00:33.486142  111680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I1212 23:00:33.486684  111680 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:00:33.487332  111680 main.go:141] libmachine: Using API Version  1
	I1212 23:00:33.487355  111680 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:00:33.487674  111680 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:00:33.487857  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:00:33.490100  111680 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 23:00:33.491850  111680 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:00:33.492164  111680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:00:33.492201  111680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:00:33.508115  111680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 23:00:33.508767  111680 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:00:33.509496  111680 main.go:141] libmachine: Using API Version  1
	I1212 23:00:33.509526  111680 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:00:33.509887  111680 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:00:33.510094  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:00:33.552784  111680 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:00:33.554790  111680 start.go:298] selected driver: kvm2
	I1212 23:00:33.554812  111680 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-809686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.98 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:00:33.554925  111680 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:00:33.555712  111680 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.555824  111680 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:00:33.572754  111680 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:00:33.573330  111680 cni.go:84] Creating CNI manager for ""
	I1212 23:00:33.573360  111680 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 23:00:33.573394  111680 start_flags.go:323] config:
	{Name:stopped-upgrade-809686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.98 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 23:00:33.573620  111680 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.576302  111680 out.go:177] * Starting control plane node stopped-upgrade-809686 in cluster stopped-upgrade-809686
	I1212 23:00:33.578536  111680 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1212 23:00:33.612850  111680 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 23:00:33.613022  111680 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/stopped-upgrade-809686/config.json ...
	I1212 23:00:33.613057  111680 cache.go:107] acquiring lock: {Name:mk7325cf87972093bbcb973759a11578551a5fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613081  111680 cache.go:107] acquiring lock: {Name:mk086d32497c3e65a5205d99c7c502384a812a30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613148  111680 cache.go:107] acquiring lock: {Name:mke11283f995e2095af76efdf2ed5615968ea41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613183  111680 cache.go:107] acquiring lock: {Name:mk77d5176eba686c2bca847bc4c639c3ea1cfbb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613273  111680 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 23:00:33.613350  111680 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 23:00:33.613370  111680 start.go:365] acquiring machines lock for stopped-upgrade-809686: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:00:33.613508  111680 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 23:00:33.613567  111680 cache.go:107] acquiring lock: {Name:mk9c35ce9554730e909f93e76069b5a5c2630899 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613543  111680 cache.go:107] acquiring lock: {Name:mk2563cec006420ef10c9c439073e1fffa94c73f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613665  111680 cache.go:107] acquiring lock: {Name:mke6d85c058dd7936a7a40cad769bdda28d5f307 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613695  111680 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 23:00:33.613781  111680 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:00:33.613846  111680 cache.go:107] acquiring lock: {Name:mkcaddb005cc4f087c17f0ea5c1952d152fdbb1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:00:33.613940  111680 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:00:33.613169  111680 cache.go:115] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 23:00:33.613968  111680 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1212 23:00:33.613978  111680 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 940.868µs
	I1212 23:00:33.613994  111680 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 23:00:33.614788  111680 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1212 23:00:33.614837  111680 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:00:33.614865  111680 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1212 23:00:33.614918  111680 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 23:00:33.615093  111680 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1212 23:00:33.615157  111680 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1212 23:00:33.615871  111680 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1212 23:00:33.812516  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:00:33.814051  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 23:00:33.830333  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1212 23:00:33.876994  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1212 23:00:33.882767  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1212 23:00:33.886401  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1212 23:00:33.886426  111680 cache.go:162] opening:  /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1212 23:00:33.892328  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1212 23:00:33.892357  111680 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 278.694572ms
	I1212 23:00:33.892373  111680 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1212 23:00:34.339206  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1212 23:00:34.339266  111680 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 725.398207ms
	I1212 23:00:34.339284  111680 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1212 23:00:34.708167  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1212 23:00:34.708202  111680 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.095055343s
	I1212 23:00:34.708218  111680 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1212 23:00:34.900790  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1212 23:00:34.900817  111680 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.287753343s
	I1212 23:00:34.900831  111680 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1212 23:00:35.145100  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 23:00:35.145132  111680 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.53164905s
	I1212 23:00:35.145144  111680 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 23:00:35.373530  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1212 23:00:35.373557  111680 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.760375505s
	I1212 23:00:35.373570  111680 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1212 23:00:35.522294  111680 cache.go:157] /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1212 23:00:35.522323  111680 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.908761036s
	I1212 23:00:35.522336  111680 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1212 23:00:35.522355  111680 cache.go:87] Successfully saved all images to host disk.
	I1212 23:01:07.216523  111680 start.go:369] acquired machines lock for "stopped-upgrade-809686" in 33.603121014s
	I1212 23:01:07.216580  111680 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:01:07.216588  111680 fix.go:54] fixHost starting: minikube
	I1212 23:01:07.217040  111680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:01:07.217099  111680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:01:07.235527  111680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I1212 23:01:07.236093  111680 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:01:07.236652  111680 main.go:141] libmachine: Using API Version  1
	I1212 23:01:07.236678  111680 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:01:07.237182  111680 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:01:07.237406  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:07.237596  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetState
	I1212 23:01:07.239387  111680 fix.go:102] recreateIfNeeded on stopped-upgrade-809686: state=Stopped err=<nil>
	I1212 23:01:07.239427  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	W1212 23:01:07.239607  111680 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:01:07.241240  111680 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-809686" ...
	I1212 23:01:07.242831  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .Start
	I1212 23:01:07.243345  111680 main.go:141] libmachine: (stopped-upgrade-809686) Ensuring networks are active...
	I1212 23:01:07.243903  111680 main.go:141] libmachine: (stopped-upgrade-809686) Ensuring network default is active
	I1212 23:01:07.244347  111680 main.go:141] libmachine: (stopped-upgrade-809686) Ensuring network minikube-net is active
	I1212 23:01:07.244926  111680 main.go:141] libmachine: (stopped-upgrade-809686) Getting domain xml...
	I1212 23:01:07.245719  111680 main.go:141] libmachine: (stopped-upgrade-809686) Creating domain...
	I1212 23:01:08.589044  111680 main.go:141] libmachine: (stopped-upgrade-809686) Waiting to get IP...
	I1212 23:01:08.590066  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:08.590445  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:08.590552  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:08.590435  112199 retry.go:31] will retry after 256.694525ms: waiting for machine to come up
	I1212 23:01:08.848995  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:08.849584  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:08.849616  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:08.849539  112199 retry.go:31] will retry after 322.57443ms: waiting for machine to come up
	I1212 23:01:09.174297  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:09.174817  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:09.174843  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:09.174770  112199 retry.go:31] will retry after 362.273869ms: waiting for machine to come up
	I1212 23:01:09.538443  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:09.538953  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:09.538978  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:09.538900  112199 retry.go:31] will retry after 383.020175ms: waiting for machine to come up
	I1212 23:01:09.923720  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:09.924367  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:09.924409  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:09.924326  112199 retry.go:31] will retry after 542.799043ms: waiting for machine to come up
	I1212 23:01:10.469067  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:10.469614  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:10.469648  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:10.469577  112199 retry.go:31] will retry after 874.630738ms: waiting for machine to come up
	I1212 23:01:11.345424  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:11.345894  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:11.345924  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:11.345825  112199 retry.go:31] will retry after 757.45661ms: waiting for machine to come up
	I1212 23:01:12.104669  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:12.105232  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:12.105258  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:12.105202  112199 retry.go:31] will retry after 916.048583ms: waiting for machine to come up
	I1212 23:01:13.022915  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:13.023383  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:13.023414  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:13.023337  112199 retry.go:31] will retry after 1.203968552s: waiting for machine to come up
	I1212 23:01:14.228823  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:14.229375  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:14.229403  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:14.229308  112199 retry.go:31] will retry after 2.0982135s: waiting for machine to come up
	I1212 23:01:16.328840  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:16.329269  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:16.329297  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:16.329215  112199 retry.go:31] will retry after 1.751570027s: waiting for machine to come up
	I1212 23:01:18.082252  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:18.082684  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:18.082715  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:18.082642  112199 retry.go:31] will retry after 2.320513908s: waiting for machine to come up
	I1212 23:01:20.404602  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:20.405173  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:20.405228  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:20.405110  112199 retry.go:31] will retry after 3.763996523s: waiting for machine to come up
	I1212 23:01:24.170428  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:24.170872  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:24.170904  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:24.170815  112199 retry.go:31] will retry after 4.093307599s: waiting for machine to come up
	I1212 23:01:28.265580  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:28.266123  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:28.266156  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:28.266060  112199 retry.go:31] will retry after 4.368743857s: waiting for machine to come up
	I1212 23:01:32.636509  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:32.637013  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | unable to find current IP address of domain stopped-upgrade-809686 in network minikube-net
	I1212 23:01:32.637045  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | I1212 23:01:32.636963  112199 retry.go:31] will retry after 6.796983707s: waiting for machine to come up
	I1212 23:01:39.436466  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.436999  111680 main.go:141] libmachine: (stopped-upgrade-809686) Found IP for machine: 192.168.50.98
	I1212 23:01:39.437025  111680 main.go:141] libmachine: (stopped-upgrade-809686) Reserving static IP address...
	I1212 23:01:39.437043  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has current primary IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.437469  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "stopped-upgrade-809686", mac: "52:54:00:72:52:91", ip: "192.168.50.98"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.437516  111680 main.go:141] libmachine: (stopped-upgrade-809686) Reserved static IP address: 192.168.50.98
	I1212 23:01:39.437541  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-809686", mac: "52:54:00:72:52:91", ip: "192.168.50.98"}
	I1212 23:01:39.437558  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | Getting to WaitForSSH function...
	I1212 23:01:39.437577  111680 main.go:141] libmachine: (stopped-upgrade-809686) Waiting for SSH to be available...
	I1212 23:01:39.439815  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.440152  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.440185  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.440279  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | Using SSH client type: external
	I1212 23:01:39.440302  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa (-rw-------)
	I1212 23:01:39.440347  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:01:39.440367  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | About to run SSH command:
	I1212 23:01:39.440383  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | exit 0
	I1212 23:01:39.570975  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | SSH cmd err, output: <nil>: 
	I1212 23:01:39.571362  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetConfigRaw
	I1212 23:01:39.572163  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetIP
	I1212 23:01:39.575178  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.575638  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.575677  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.575936  111680 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/stopped-upgrade-809686/config.json ...
	I1212 23:01:39.576186  111680 machine.go:88] provisioning docker machine ...
	I1212 23:01:39.576215  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:39.576466  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetMachineName
	I1212 23:01:39.576672  111680 buildroot.go:166] provisioning hostname "stopped-upgrade-809686"
	I1212 23:01:39.576701  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetMachineName
	I1212 23:01:39.576881  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:39.579738  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.580130  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.580148  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.580299  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:39.580499  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:39.580675  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:39.580822  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:39.580988  111680 main.go:141] libmachine: Using SSH client type: native
	I1212 23:01:39.581377  111680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.98 22 <nil> <nil>}
	I1212 23:01:39.581392  111680 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-809686 && echo "stopped-upgrade-809686" | sudo tee /etc/hostname
	I1212 23:01:39.706033  111680 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-809686
	
	I1212 23:01:39.706086  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:39.709100  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.709513  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.709555  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.709663  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:39.709920  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:39.710102  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:39.710255  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:39.710455  111680 main.go:141] libmachine: Using SSH client type: native
	I1212 23:01:39.710791  111680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.98 22 <nil> <nil>}
	I1212 23:01:39.710809  111680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-809686' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-809686/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-809686' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:01:39.831356  111680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:01:39.831404  111680 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:01:39.831449  111680 buildroot.go:174] setting up certificates
	I1212 23:01:39.831466  111680 provision.go:83] configureAuth start
	I1212 23:01:39.831483  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetMachineName
	I1212 23:01:39.831753  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetIP
	I1212 23:01:39.834477  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.834819  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.834856  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.834999  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:39.837448  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.837782  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:39.837809  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:39.837959  111680 provision.go:138] copyHostCerts
	I1212 23:01:39.838032  111680 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:01:39.838046  111680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:01:39.838121  111680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:01:39.838241  111680 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:01:39.838253  111680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:01:39.838291  111680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:01:39.838379  111680 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:01:39.838388  111680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:01:39.838422  111680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:01:39.838539  111680 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-809686 san=[192.168.50.98 192.168.50.98 localhost 127.0.0.1 minikube stopped-upgrade-809686]
	I1212 23:01:40.034601  111680 provision.go:172] copyRemoteCerts
	I1212 23:01:40.034728  111680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:01:40.034759  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:40.037819  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:40.038206  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:40.038246  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:40.038386  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:40.038592  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:40.038771  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:40.038902  111680 sshutil.go:53] new ssh client: &{IP:192.168.50.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa Username:docker}
	I1212 23:01:40.126959  111680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:01:40.142775  111680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:01:40.157682  111680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:01:40.173406  111680 provision.go:86] duration metric: configureAuth took 341.922725ms
	I1212 23:01:40.173437  111680 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:01:40.173671  111680 config.go:182] Loaded profile config "stopped-upgrade-809686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 23:01:40.173776  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:40.176503  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:40.176949  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:40.176992  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:40.177148  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:40.177370  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:40.177552  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:40.177655  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:40.177814  111680 main.go:141] libmachine: Using SSH client type: native
	I1212 23:01:40.178174  111680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.98 22 <nil> <nil>}
	I1212 23:01:40.178194  111680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:01:48.281642  111680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:01:48.281670  111680 machine.go:91] provisioned docker machine in 8.705465524s
	I1212 23:01:48.281685  111680 start.go:300] post-start starting for "stopped-upgrade-809686" (driver="kvm2")
	I1212 23:01:48.281699  111680 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:01:48.281722  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:48.282057  111680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:01:48.282099  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:48.284726  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.285193  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:48.285231  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.285448  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:48.285666  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:48.285841  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:48.286014  111680 sshutil.go:53] new ssh client: &{IP:192.168.50.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa Username:docker}
	I1212 23:01:48.375940  111680 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:01:48.380959  111680 info.go:137] Remote host: Buildroot 2019.02.7
	I1212 23:01:48.380988  111680 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:01:48.381069  111680 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:01:48.381185  111680 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:01:48.381302  111680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:01:48.388450  111680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:01:48.402753  111680 start.go:303] post-start completed in 121.049863ms
	I1212 23:01:48.402786  111680 fix.go:56] fixHost completed within 41.186197261s
	I1212 23:01:48.402815  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:48.405477  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.405800  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:48.405827  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.405942  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:48.406149  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:48.406338  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:48.406477  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:48.406650  111680 main.go:141] libmachine: Using SSH client type: native
	I1212 23:01:48.406957  111680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.98 22 <nil> <nil>}
	I1212 23:01:48.406968  111680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:01:48.523787  111680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702422108.457893944
	
	I1212 23:01:48.523817  111680 fix.go:206] guest clock: 1702422108.457893944
	I1212 23:01:48.523825  111680 fix.go:219] Guest: 2023-12-12 23:01:48.457893944 +0000 UTC Remote: 2023-12-12 23:01:48.402791006 +0000 UTC m=+75.017874625 (delta=55.102938ms)
	I1212 23:01:48.523883  111680 fix.go:190] guest clock delta is within tolerance: 55.102938ms
	I1212 23:01:48.523893  111680 start.go:83] releasing machines lock for "stopped-upgrade-809686", held for 41.307338107s
	I1212 23:01:48.523927  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:48.524182  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetIP
	I1212 23:01:48.527416  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.527838  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:48.527876  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.528011  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:48.528610  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:48.528851  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .DriverName
	I1212 23:01:48.528913  111680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:01:48.528963  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:48.529181  111680 ssh_runner.go:195] Run: cat /version.json
	I1212 23:01:48.529210  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHHostname
	I1212 23:01:48.532046  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.532271  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.532493  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:48.532526  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.532681  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:52:91", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-13 00:01:34 +0000 UTC Type:0 Mac:52:54:00:72:52:91 Iaid: IPaddr:192.168.50.98 Prefix:24 Hostname:stopped-upgrade-809686 Clientid:01:52:54:00:72:52:91}
	I1212 23:01:48.532712  111680 main.go:141] libmachine: (stopped-upgrade-809686) DBG | domain stopped-upgrade-809686 has defined IP address 192.168.50.98 and MAC address 52:54:00:72:52:91 in network minikube-net
	I1212 23:01:48.532744  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:48.532886  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHPort
	I1212 23:01:48.532986  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:48.533048  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHKeyPath
	I1212 23:01:48.533130  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:48.533140  111680 main.go:141] libmachine: (stopped-upgrade-809686) Calling .GetSSHUsername
	I1212 23:01:48.533287  111680 sshutil.go:53] new ssh client: &{IP:192.168.50.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa Username:docker}
	I1212 23:01:48.533310  111680 sshutil.go:53] new ssh client: &{IP:192.168.50.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/stopped-upgrade-809686/id_rsa Username:docker}
	W1212 23:01:48.652777  111680 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 23:01:48.652864  111680 ssh_runner.go:195] Run: systemctl --version
	I1212 23:01:48.659987  111680 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:01:48.897388  111680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:01:48.903460  111680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:01:48.903554  111680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:01:48.909107  111680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 23:01:48.909139  111680 start.go:475] detecting cgroup driver to use...
	I1212 23:01:48.909233  111680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:01:48.919419  111680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:01:48.929540  111680 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:01:48.929612  111680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:01:48.938441  111680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:01:48.947922  111680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 23:01:48.956909  111680 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 23:01:48.956977  111680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:01:49.052155  111680 docker.go:219] disabling docker service ...
	I1212 23:01:49.052248  111680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:01:49.066149  111680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:01:49.074553  111680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:01:49.181676  111680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:01:49.278130  111680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:01:49.288380  111680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:01:49.300591  111680 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:01:49.300661  111680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:01:49.310517  111680 out.go:177] 
	W1212 23:01:49.311983  111680 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 23:01:49.312010  111680 out.go:239] * 
	* 
	W1212 23:01:49.313374  111680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:01:49.314899  111680 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-809686 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (306.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-809120 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-809120 --alsologtostderr -v=3: exit status 82 (2m1.280219937s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-809120"  ...
	* Stopping node "embed-certs-809120"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:09:10.745111  126734 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:10.745238  126734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:10.745250  126734 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:10.745257  126734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:10.745469  126734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:09:10.745734  126734 out.go:303] Setting JSON to false
	I1212 23:09:10.745821  126734 mustload.go:65] Loading cluster: embed-certs-809120
	I1212 23:09:10.746164  126734 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:10.746235  126734 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:09:10.746412  126734 mustload.go:65] Loading cluster: embed-certs-809120
	I1212 23:09:10.746532  126734 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:10.746601  126734 stop.go:39] StopHost: embed-certs-809120
	I1212 23:09:10.747084  126734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:10.747144  126734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:10.765080  126734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I1212 23:09:10.765715  126734 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:10.766567  126734 main.go:141] libmachine: Using API Version  1
	I1212 23:09:10.766619  126734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:10.767233  126734 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:10.769614  126734 out.go:177] * Stopping node "embed-certs-809120"  ...
	I1212 23:09:10.771044  126734 main.go:141] libmachine: Stopping "embed-certs-809120"...
	I1212 23:09:10.771114  126734 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:09:10.773312  126734 main.go:141] libmachine: (embed-certs-809120) Calling .Stop
	I1212 23:09:10.778058  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 0/60
	I1212 23:09:11.779680  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 1/60
	I1212 23:09:12.781997  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 2/60
	I1212 23:09:13.783525  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 3/60
	I1212 23:09:14.786190  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 4/60
	I1212 23:09:15.788520  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 5/60
	I1212 23:09:16.790127  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 6/60
	I1212 23:09:17.791913  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 7/60
	I1212 23:09:18.793828  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 8/60
	I1212 23:09:19.795478  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 9/60
	I1212 23:09:20.798209  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 10/60
	I1212 23:09:21.799695  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 11/60
	I1212 23:09:22.801176  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 12/60
	I1212 23:09:23.802857  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 13/60
	I1212 23:09:24.804489  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 14/60
	I1212 23:09:25.806568  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 15/60
	I1212 23:09:26.808029  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 16/60
	I1212 23:09:27.809724  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 17/60
	I1212 23:09:28.811484  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 18/60
	I1212 23:09:29.814068  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 19/60
	I1212 23:09:30.816260  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 20/60
	I1212 23:09:31.818347  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 21/60
	I1212 23:09:32.819802  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 22/60
	I1212 23:09:33.821230  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 23/60
	I1212 23:09:34.822538  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 24/60
	I1212 23:09:35.824432  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 25/60
	I1212 23:09:36.826003  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 26/60
	I1212 23:09:37.827747  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 27/60
	I1212 23:09:38.829898  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 28/60
	I1212 23:09:39.831338  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 29/60
	I1212 23:09:40.833453  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 30/60
	I1212 23:09:41.835499  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 31/60
	I1212 23:09:42.837863  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 32/60
	I1212 23:09:43.839353  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 33/60
	I1212 23:09:44.840882  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 34/60
	I1212 23:09:45.843165  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 35/60
	I1212 23:09:46.844774  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 36/60
	I1212 23:09:47.846067  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 37/60
	I1212 23:09:48.847600  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 38/60
	I1212 23:09:49.849053  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 39/60
	I1212 23:09:50.851206  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 40/60
	I1212 23:09:51.852438  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 41/60
	I1212 23:09:52.853826  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 42/60
	I1212 23:09:53.855083  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 43/60
	I1212 23:09:54.856546  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 44/60
	I1212 23:09:55.858312  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 45/60
	I1212 23:09:56.859759  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 46/60
	I1212 23:09:57.861291  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 47/60
	I1212 23:09:58.863020  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 48/60
	I1212 23:09:59.864496  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 49/60
	I1212 23:10:00.866701  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 50/60
	I1212 23:10:01.868141  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 51/60
	I1212 23:10:02.869607  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 52/60
	I1212 23:10:03.871372  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 53/60
	I1212 23:10:04.872912  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 54/60
	I1212 23:10:05.874890  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 55/60
	I1212 23:10:06.876244  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 56/60
	I1212 23:10:07.877545  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 57/60
	I1212 23:10:08.878996  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 58/60
	I1212 23:10:09.880432  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 59/60
	I1212 23:10:10.881749  126734 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:10:10.881806  126734 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:10.881824  126734 retry.go:31] will retry after 945.219824ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:11.827935  126734 stop.go:39] StopHost: embed-certs-809120
	I1212 23:10:11.828514  126734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:10:11.828576  126734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:10:11.842882  126734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I1212 23:10:11.843352  126734 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:10:11.843836  126734 main.go:141] libmachine: Using API Version  1
	I1212 23:10:11.843871  126734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:10:11.844159  126734 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:10:11.846247  126734 out.go:177] * Stopping node "embed-certs-809120"  ...
	I1212 23:10:11.847725  126734 main.go:141] libmachine: Stopping "embed-certs-809120"...
	I1212 23:10:11.847741  126734 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:10:11.849281  126734 main.go:141] libmachine: (embed-certs-809120) Calling .Stop
	I1212 23:10:11.852608  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 0/60
	I1212 23:10:12.854432  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 1/60
	I1212 23:10:13.856171  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 2/60
	I1212 23:10:14.857649  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 3/60
	I1212 23:10:15.859548  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 4/60
	I1212 23:10:16.861412  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 5/60
	I1212 23:10:17.862783  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 6/60
	I1212 23:10:18.864269  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 7/60
	I1212 23:10:19.865778  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 8/60
	I1212 23:10:20.867183  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 9/60
	I1212 23:10:21.869356  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 10/60
	I1212 23:10:22.871056  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 11/60
	I1212 23:10:23.872484  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 12/60
	I1212 23:10:24.873939  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 13/60
	I1212 23:10:25.875281  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 14/60
	I1212 23:10:26.877217  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 15/60
	I1212 23:10:27.878575  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 16/60
	I1212 23:10:28.880083  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 17/60
	I1212 23:10:29.881369  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 18/60
	I1212 23:10:30.882920  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 19/60
	I1212 23:10:31.884605  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 20/60
	I1212 23:10:32.886329  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 21/60
	I1212 23:10:33.887829  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 22/60
	I1212 23:10:34.889242  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 23/60
	I1212 23:10:35.890502  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 24/60
	I1212 23:10:36.892191  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 25/60
	I1212 23:10:37.893422  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 26/60
	I1212 23:10:38.894957  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 27/60
	I1212 23:10:39.896531  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 28/60
	I1212 23:10:40.897884  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 29/60
	I1212 23:10:41.899725  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 30/60
	I1212 23:10:42.901997  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 31/60
	I1212 23:10:43.903427  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 32/60
	I1212 23:10:44.904965  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 33/60
	I1212 23:10:45.906339  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 34/60
	I1212 23:10:46.908149  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 35/60
	I1212 23:10:47.909687  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 36/60
	I1212 23:10:48.911047  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 37/60
	I1212 23:10:49.912584  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 38/60
	I1212 23:10:50.913885  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 39/60
	I1212 23:10:51.916282  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 40/60
	I1212 23:10:52.917769  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 41/60
	I1212 23:10:53.919229  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 42/60
	I1212 23:10:54.920526  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 43/60
	I1212 23:10:55.921955  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 44/60
	I1212 23:10:56.923981  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 45/60
	I1212 23:10:57.925708  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 46/60
	I1212 23:10:58.927124  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 47/60
	I1212 23:10:59.928674  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 48/60
	I1212 23:11:00.930021  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 49/60
	I1212 23:11:01.931944  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 50/60
	I1212 23:11:02.933390  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 51/60
	I1212 23:11:03.934807  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 52/60
	I1212 23:11:04.936233  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 53/60
	I1212 23:11:05.937567  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 54/60
	I1212 23:11:06.939156  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 55/60
	I1212 23:11:07.940531  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 56/60
	I1212 23:11:08.942057  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 57/60
	I1212 23:11:09.943571  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 58/60
	I1212 23:11:10.944903  126734 main.go:141] libmachine: (embed-certs-809120) Waiting for machine to stop 59/60
	I1212 23:11:11.945766  126734 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:11:11.945822  126734 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:11:11.948080  126734 out.go:177] 
	W1212 23:11:11.949808  126734 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 23:11:11.949831  126734 out.go:239] * 
	* 
	W1212 23:11:11.953449  126734 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:11:11.954966  126734 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-809120 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
E1212 23:11:12.253137   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:11:12.688423   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:15.249012   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:20.369896   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120: exit status 3 (18.44723274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:30.403556  127518 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E1212 23:11:30.403576  127518 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-809120" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-549640 --alsologtostderr -v=3
E1212 23:09:22.045170   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-549640 --alsologtostderr -v=3: exit status 82 (2m1.051617872s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-549640"  ...
	* Stopping node "old-k8s-version-549640"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:09:20.609461  126900 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:20.609774  126900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:20.609785  126900 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:20.609791  126900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:20.610018  126900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:09:20.610321  126900 out.go:303] Setting JSON to false
	I1212 23:09:20.610433  126900 mustload.go:65] Loading cluster: old-k8s-version-549640
	I1212 23:09:20.610956  126900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:09:20.611060  126900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:09:20.611305  126900 mustload.go:65] Loading cluster: old-k8s-version-549640
	I1212 23:09:20.611471  126900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:09:20.611519  126900 stop.go:39] StopHost: old-k8s-version-549640
	I1212 23:09:20.612064  126900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:20.612129  126900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:20.628474  126900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I1212 23:09:20.628995  126900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:20.629769  126900 main.go:141] libmachine: Using API Version  1
	I1212 23:09:20.629811  126900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:20.630314  126900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:20.633337  126900 out.go:177] * Stopping node "old-k8s-version-549640"  ...
	I1212 23:09:20.634904  126900 main.go:141] libmachine: Stopping "old-k8s-version-549640"...
	I1212 23:09:20.634962  126900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:09:20.636849  126900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Stop
	I1212 23:09:20.640833  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 0/60
	I1212 23:09:21.642340  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 1/60
	I1212 23:09:22.643942  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 2/60
	I1212 23:09:23.645953  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 3/60
	I1212 23:09:24.647647  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 4/60
	I1212 23:09:25.649318  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 5/60
	I1212 23:09:26.650866  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 6/60
	I1212 23:09:27.652375  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 7/60
	I1212 23:09:28.653858  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 8/60
	I1212 23:09:29.655561  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 9/60
	I1212 23:09:30.657352  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 10/60
	I1212 23:09:31.659042  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 11/60
	I1212 23:09:32.660354  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 12/60
	I1212 23:09:33.661974  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 13/60
	I1212 23:09:34.663301  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 14/60
	I1212 23:09:35.665529  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 15/60
	I1212 23:09:36.667683  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 16/60
	I1212 23:09:37.669241  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 17/60
	I1212 23:09:38.671220  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 18/60
	I1212 23:09:39.672923  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 19/60
	I1212 23:09:40.675339  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 20/60
	I1212 23:09:41.676808  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 21/60
	I1212 23:09:42.678416  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 22/60
	I1212 23:09:43.679973  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 23/60
	I1212 23:09:44.681822  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 24/60
	I1212 23:09:45.684127  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 25/60
	I1212 23:09:46.686163  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 26/60
	I1212 23:09:47.687565  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 27/60
	I1212 23:09:48.689013  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 28/60
	I1212 23:09:49.690373  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 29/60
	I1212 23:09:50.692595  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 30/60
	I1212 23:09:51.694042  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 31/60
	I1212 23:09:52.695429  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 32/60
	I1212 23:09:53.698073  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 33/60
	I1212 23:09:54.699467  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 34/60
	I1212 23:09:55.701278  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 35/60
	I1212 23:09:56.702778  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 36/60
	I1212 23:09:57.704341  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 37/60
	I1212 23:09:58.705979  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 38/60
	I1212 23:09:59.707679  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 39/60
	I1212 23:10:00.709954  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 40/60
	I1212 23:10:01.711281  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 41/60
	I1212 23:10:02.712587  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 42/60
	I1212 23:10:03.713930  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 43/60
	I1212 23:10:04.715359  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 44/60
	I1212 23:10:05.717532  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 45/60
	I1212 23:10:06.718894  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 46/60
	I1212 23:10:07.720210  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 47/60
	I1212 23:10:08.721627  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 48/60
	I1212 23:10:09.722838  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 49/60
	I1212 23:10:10.725116  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 50/60
	I1212 23:10:11.726467  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 51/60
	I1212 23:10:12.727970  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 52/60
	I1212 23:10:13.729800  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 53/60
	I1212 23:10:14.731425  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 54/60
	I1212 23:10:15.733635  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 55/60
	I1212 23:10:16.735200  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 56/60
	I1212 23:10:17.736583  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 57/60
	I1212 23:10:18.737920  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 58/60
	I1212 23:10:19.739531  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 59/60
	I1212 23:10:20.741127  126900 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:10:20.741209  126900 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:20.741232  126900 retry.go:31] will retry after 728.299803ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:21.470099  126900 stop.go:39] StopHost: old-k8s-version-549640
	I1212 23:10:21.470476  126900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:10:21.470524  126900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:10:21.484853  126900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I1212 23:10:21.485343  126900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:10:21.485957  126900 main.go:141] libmachine: Using API Version  1
	I1212 23:10:21.485984  126900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:10:21.486294  126900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:10:21.488429  126900 out.go:177] * Stopping node "old-k8s-version-549640"  ...
	I1212 23:10:21.490099  126900 main.go:141] libmachine: Stopping "old-k8s-version-549640"...
	I1212 23:10:21.490114  126900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:10:21.491683  126900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Stop
	I1212 23:10:21.495487  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 0/60
	I1212 23:10:22.496973  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 1/60
	I1212 23:10:23.498376  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 2/60
	I1212 23:10:24.499978  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 3/60
	I1212 23:10:25.501405  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 4/60
	I1212 23:10:26.503049  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 5/60
	I1212 23:10:27.504966  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 6/60
	I1212 23:10:28.506923  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 7/60
	I1212 23:10:29.508272  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 8/60
	I1212 23:10:30.509650  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 9/60
	I1212 23:10:31.511508  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 10/60
	I1212 23:10:32.512999  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 11/60
	I1212 23:10:33.514346  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 12/60
	I1212 23:10:34.515727  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 13/60
	I1212 23:10:35.517189  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 14/60
	I1212 23:10:36.518570  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 15/60
	I1212 23:10:37.519879  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 16/60
	I1212 23:10:38.521556  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 17/60
	I1212 23:10:39.522822  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 18/60
	I1212 23:10:40.524146  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 19/60
	I1212 23:10:41.526059  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 20/60
	I1212 23:10:42.527624  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 21/60
	I1212 23:10:43.529167  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 22/60
	I1212 23:10:44.530433  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 23/60
	I1212 23:10:45.531999  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 24/60
	I1212 23:10:46.533522  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 25/60
	I1212 23:10:47.534915  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 26/60
	I1212 23:10:48.536361  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 27/60
	I1212 23:10:49.537656  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 28/60
	I1212 23:10:50.539131  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 29/60
	I1212 23:10:51.540778  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 30/60
	I1212 23:10:52.541961  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 31/60
	I1212 23:10:53.543568  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 32/60
	I1212 23:10:54.544901  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 33/60
	I1212 23:10:55.546535  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 34/60
	I1212 23:10:56.548152  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 35/60
	I1212 23:10:57.549446  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 36/60
	I1212 23:10:58.550871  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 37/60
	I1212 23:10:59.552038  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 38/60
	I1212 23:11:00.553674  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 39/60
	I1212 23:11:01.555514  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 40/60
	I1212 23:11:02.557132  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 41/60
	I1212 23:11:03.558512  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 42/60
	I1212 23:11:04.559940  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 43/60
	I1212 23:11:05.561242  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 44/60
	I1212 23:11:06.562952  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 45/60
	I1212 23:11:07.564460  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 46/60
	I1212 23:11:08.565971  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 47/60
	I1212 23:11:09.567363  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 48/60
	I1212 23:11:10.568828  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 49/60
	I1212 23:11:11.570499  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 50/60
	I1212 23:11:12.571758  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 51/60
	I1212 23:11:13.573297  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 52/60
	I1212 23:11:14.574780  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 53/60
	I1212 23:11:15.576315  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 54/60
	I1212 23:11:16.577964  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 55/60
	I1212 23:11:17.579346  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 56/60
	I1212 23:11:18.581069  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 57/60
	I1212 23:11:19.583088  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 58/60
	I1212 23:11:20.584565  126900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for machine to stop 59/60
	I1212 23:11:21.585606  126900 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:11:21.585655  126900 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:11:21.587638  126900 out.go:177] 
	W1212 23:11:21.589026  126900 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 23:11:21.589040  126900 out.go:239] * 
	* 
	W1212 23:11:21.592353  126900 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:11:21.593669  126900 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-549640 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
E1212 23:11:22.620039   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:11:24.127148   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640: exit status 3 (18.536611119s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:40.131545  127572 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host
	E1212 23:11:40.131565  127572 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-549640" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-115023 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-115023 --alsologtostderr -v=3: exit status 82 (2m1.242094498s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-115023"  ...
	* Stopping node "no-preload-115023"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:09:47.340552  127097 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:47.340875  127097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:47.340885  127097 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:47.340890  127097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:47.341183  127097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:09:47.341482  127097 out.go:303] Setting JSON to false
	I1212 23:09:47.341563  127097 mustload.go:65] Loading cluster: no-preload-115023
	I1212 23:09:47.342001  127097 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:09:47.342077  127097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:09:47.342500  127097 mustload.go:65] Loading cluster: no-preload-115023
	I1212 23:09:47.342637  127097 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:09:47.342678  127097 stop.go:39] StopHost: no-preload-115023
	I1212 23:09:47.343327  127097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:47.343394  127097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:47.359553  127097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I1212 23:09:47.360098  127097 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:47.360791  127097 main.go:141] libmachine: Using API Version  1
	I1212 23:09:47.360817  127097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:47.361307  127097 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:47.363620  127097 out.go:177] * Stopping node "no-preload-115023"  ...
	I1212 23:09:47.365370  127097 main.go:141] libmachine: Stopping "no-preload-115023"...
	I1212 23:09:47.365397  127097 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:09:47.367383  127097 main.go:141] libmachine: (no-preload-115023) Calling .Stop
	I1212 23:09:47.370796  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 0/60
	I1212 23:09:48.372447  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 1/60
	I1212 23:09:49.374063  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 2/60
	I1212 23:09:50.375432  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 3/60
	I1212 23:09:51.376866  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 4/60
	I1212 23:09:52.378614  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 5/60
	I1212 23:09:53.379981  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 6/60
	I1212 23:09:54.381315  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 7/60
	I1212 23:09:55.382691  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 8/60
	I1212 23:09:56.384391  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 9/60
	I1212 23:09:57.386844  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 10/60
	I1212 23:09:58.389169  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 11/60
	I1212 23:09:59.390537  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 12/60
	I1212 23:10:00.392028  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 13/60
	I1212 23:10:01.393304  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 14/60
	I1212 23:10:02.395177  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 15/60
	I1212 23:10:03.396538  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 16/60
	I1212 23:10:04.397973  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 17/60
	I1212 23:10:05.399337  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 18/60
	I1212 23:10:06.400686  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 19/60
	I1212 23:10:07.402901  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 20/60
	I1212 23:10:08.404312  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 21/60
	I1212 23:10:09.405682  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 22/60
	I1212 23:10:10.407139  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 23/60
	I1212 23:10:11.408537  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 24/60
	I1212 23:10:12.410636  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 25/60
	I1212 23:10:13.412219  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 26/60
	I1212 23:10:14.413828  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 27/60
	I1212 23:10:15.415514  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 28/60
	I1212 23:10:16.416934  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 29/60
	I1212 23:10:17.418182  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 30/60
	I1212 23:10:18.419896  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 31/60
	I1212 23:10:19.421404  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 32/60
	I1212 23:10:20.422879  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 33/60
	I1212 23:10:21.424526  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 34/60
	I1212 23:10:22.426768  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 35/60
	I1212 23:10:23.428352  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 36/60
	I1212 23:10:24.429924  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 37/60
	I1212 23:10:25.431516  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 38/60
	I1212 23:10:26.433018  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 39/60
	I1212 23:10:27.435233  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 40/60
	I1212 23:10:28.436787  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 41/60
	I1212 23:10:29.438161  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 42/60
	I1212 23:10:30.439593  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 43/60
	I1212 23:10:31.440885  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 44/60
	I1212 23:10:32.443133  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 45/60
	I1212 23:10:33.444659  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 46/60
	I1212 23:10:34.446028  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 47/60
	I1212 23:10:35.447600  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 48/60
	I1212 23:10:36.448930  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 49/60
	I1212 23:10:37.450632  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 50/60
	I1212 23:10:38.452178  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 51/60
	I1212 23:10:39.453762  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 52/60
	I1212 23:10:40.455326  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 53/60
	I1212 23:10:41.456696  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 54/60
	I1212 23:10:42.458890  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 55/60
	I1212 23:10:43.460373  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 56/60
	I1212 23:10:44.461715  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 57/60
	I1212 23:10:45.463179  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 58/60
	I1212 23:10:46.464553  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 59/60
	I1212 23:10:47.465933  127097 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:10:47.465983  127097 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:47.466003  127097 retry.go:31] will retry after 924.314662ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:48.391108  127097 stop.go:39] StopHost: no-preload-115023
	I1212 23:10:48.391512  127097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:10:48.391569  127097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:10:48.405746  127097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I1212 23:10:48.406218  127097 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:10:48.406746  127097 main.go:141] libmachine: Using API Version  1
	I1212 23:10:48.406777  127097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:10:48.407111  127097 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:10:48.409197  127097 out.go:177] * Stopping node "no-preload-115023"  ...
	I1212 23:10:48.410569  127097 main.go:141] libmachine: Stopping "no-preload-115023"...
	I1212 23:10:48.410587  127097 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:10:48.412308  127097 main.go:141] libmachine: (no-preload-115023) Calling .Stop
	I1212 23:10:48.415768  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 0/60
	I1212 23:10:49.417090  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 1/60
	I1212 23:10:50.418588  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 2/60
	I1212 23:10:51.420162  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 3/60
	I1212 23:10:52.421706  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 4/60
	I1212 23:10:53.423500  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 5/60
	I1212 23:10:54.424937  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 6/60
	I1212 23:10:55.426380  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 7/60
	I1212 23:10:56.427885  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 8/60
	I1212 23:10:57.429205  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 9/60
	I1212 23:10:58.431043  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 10/60
	I1212 23:10:59.432434  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 11/60
	I1212 23:11:00.433739  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 12/60
	I1212 23:11:01.435280  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 13/60
	I1212 23:11:02.436736  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 14/60
	I1212 23:11:03.438341  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 15/60
	I1212 23:11:04.439869  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 16/60
	I1212 23:11:05.441298  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 17/60
	I1212 23:11:06.442786  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 18/60
	I1212 23:11:07.444282  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 19/60
	I1212 23:11:08.445957  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 20/60
	I1212 23:11:09.447197  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 21/60
	I1212 23:11:10.448722  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 22/60
	I1212 23:11:11.449999  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 23/60
	I1212 23:11:12.451412  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 24/60
	I1212 23:11:13.453023  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 25/60
	I1212 23:11:14.454275  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 26/60
	I1212 23:11:15.455740  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 27/60
	I1212 23:11:16.457794  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 28/60
	I1212 23:11:17.459437  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 29/60
	I1212 23:11:18.461883  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 30/60
	I1212 23:11:19.463296  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 31/60
	I1212 23:11:20.464859  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 32/60
	I1212 23:11:21.466212  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 33/60
	I1212 23:11:22.467568  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 34/60
	I1212 23:11:23.469452  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 35/60
	I1212 23:11:24.470864  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 36/60
	I1212 23:11:25.472467  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 37/60
	I1212 23:11:26.473987  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 38/60
	I1212 23:11:27.475302  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 39/60
	I1212 23:11:28.477065  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 40/60
	I1212 23:11:29.478530  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 41/60
	I1212 23:11:30.479769  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 42/60
	I1212 23:11:31.481104  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 43/60
	I1212 23:11:32.482352  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 44/60
	I1212 23:11:33.484181  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 45/60
	I1212 23:11:34.485619  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 46/60
	I1212 23:11:35.486949  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 47/60
	I1212 23:11:36.488354  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 48/60
	I1212 23:11:37.489605  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 49/60
	I1212 23:11:38.491346  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 50/60
	I1212 23:11:39.492677  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 51/60
	I1212 23:11:40.493923  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 52/60
	I1212 23:11:41.495192  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 53/60
	I1212 23:11:42.496402  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 54/60
	I1212 23:11:43.498036  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 55/60
	I1212 23:11:44.499450  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 56/60
	I1212 23:11:45.500951  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 57/60
	I1212 23:11:46.502280  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 58/60
	I1212 23:11:47.503822  127097 main.go:141] libmachine: (no-preload-115023) Waiting for machine to stop 59/60
	I1212 23:11:48.504702  127097 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:11:48.504756  127097 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:11:48.506555  127097 out.go:177] 
	W1212 23:11:48.508023  127097 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 23:11:48.508041  127097 out.go:239] * 
	* 
	W1212 23:11:48.511398  127097 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:11:48.512874  127097 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-115023 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023: exit status 3 (18.497217025s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:07.011633  127829 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E1212 23:12:07.011657  127829 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-115023" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-850839 --alsologtostderr -v=3
E1212 23:10:02.203935   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.209227   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.219513   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.239798   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.280093   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.360441   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.521629   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:02.841867   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:03.482949   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:04.763912   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:07.324102   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:12.444979   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:22.685808   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:23.485666   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:10:25.171711   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:10:43.166822   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:10:51.771545   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:51.776804   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:51.787073   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:51.807323   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:51.848320   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:51.928534   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:52.088988   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:52.409754   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:53.050199   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:54.330962   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:10:56.891476   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:11:02.012389   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:11:10.129220   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.134550   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.144857   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.165148   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.205539   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.286035   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.446468   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:10.767341   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:11.407874   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-850839 --alsologtostderr -v=3: exit status 82 (2m1.32908833s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-850839"  ...
	* Stopping node "default-k8s-diff-port-850839"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:09:58.696662  127254 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:09:58.696973  127254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:58.696986  127254 out.go:309] Setting ErrFile to fd 2...
	I1212 23:09:58.696991  127254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:09:58.697248  127254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:09:58.697546  127254 out.go:303] Setting JSON to false
	I1212 23:09:58.697655  127254 mustload.go:65] Loading cluster: default-k8s-diff-port-850839
	I1212 23:09:58.698191  127254 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:58.698311  127254 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:09:58.698543  127254 mustload.go:65] Loading cluster: default-k8s-diff-port-850839
	I1212 23:09:58.698721  127254 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:09:58.698769  127254 stop.go:39] StopHost: default-k8s-diff-port-850839
	I1212 23:09:58.699402  127254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:09:58.699490  127254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:09:58.715009  127254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I1212 23:09:58.715678  127254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:09:58.716387  127254 main.go:141] libmachine: Using API Version  1
	I1212 23:09:58.716423  127254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:09:58.716799  127254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:09:58.719601  127254 out.go:177] * Stopping node "default-k8s-diff-port-850839"  ...
	I1212 23:09:58.721234  127254 main.go:141] libmachine: Stopping "default-k8s-diff-port-850839"...
	I1212 23:09:58.721254  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:09:58.723274  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Stop
	I1212 23:09:58.726614  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 0/60
	I1212 23:09:59.728845  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 1/60
	I1212 23:10:00.730030  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 2/60
	I1212 23:10:01.731416  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 3/60
	I1212 23:10:02.733535  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 4/60
	I1212 23:10:03.735423  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 5/60
	I1212 23:10:04.736588  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 6/60
	I1212 23:10:05.737803  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 7/60
	I1212 23:10:06.739040  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 8/60
	I1212 23:10:07.740283  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 9/60
	I1212 23:10:08.742361  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 10/60
	I1212 23:10:09.743608  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 11/60
	I1212 23:10:10.744859  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 12/60
	I1212 23:10:11.746307  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 13/60
	I1212 23:10:12.748270  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 14/60
	I1212 23:10:13.750137  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 15/60
	I1212 23:10:14.751710  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 16/60
	I1212 23:10:15.753826  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 17/60
	I1212 23:10:16.755332  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 18/60
	I1212 23:10:17.756911  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 19/60
	I1212 23:10:18.759257  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 20/60
	I1212 23:10:19.760654  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 21/60
	I1212 23:10:20.762150  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 22/60
	I1212 23:10:21.763869  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 23/60
	I1212 23:10:22.765728  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 24/60
	I1212 23:10:23.767779  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 25/60
	I1212 23:10:24.769172  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 26/60
	I1212 23:10:25.770834  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 27/60
	I1212 23:10:26.772156  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 28/60
	I1212 23:10:27.773676  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 29/60
	I1212 23:10:28.775707  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 30/60
	I1212 23:10:29.777057  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 31/60
	I1212 23:10:30.778730  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 32/60
	I1212 23:10:31.780082  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 33/60
	I1212 23:10:32.781666  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 34/60
	I1212 23:10:33.783760  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 35/60
	I1212 23:10:34.785050  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 36/60
	I1212 23:10:35.786385  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 37/60
	I1212 23:10:36.787884  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 38/60
	I1212 23:10:37.789148  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 39/60
	I1212 23:10:38.791505  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 40/60
	I1212 23:10:39.792829  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 41/60
	I1212 23:10:40.794068  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 42/60
	I1212 23:10:41.795829  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 43/60
	I1212 23:10:42.797368  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 44/60
	I1212 23:10:43.799500  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 45/60
	I1212 23:10:44.801204  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 46/60
	I1212 23:10:45.802584  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 47/60
	I1212 23:10:46.804233  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 48/60
	I1212 23:10:47.806469  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 49/60
	I1212 23:10:48.807835  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 50/60
	I1212 23:10:49.809304  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 51/60
	I1212 23:10:50.810605  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 52/60
	I1212 23:10:51.812023  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 53/60
	I1212 23:10:52.813621  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 54/60
	I1212 23:10:53.815746  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 55/60
	I1212 23:10:54.817825  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 56/60
	I1212 23:10:55.819328  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 57/60
	I1212 23:10:56.820933  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 58/60
	I1212 23:10:57.823330  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 59/60
	I1212 23:10:58.824775  127254 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:10:58.824860  127254 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:58.824886  127254 retry.go:31] will retry after 1.013317438s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:10:59.839044  127254 stop.go:39] StopHost: default-k8s-diff-port-850839
	I1212 23:10:59.839471  127254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:10:59.839515  127254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:10:59.854183  127254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I1212 23:10:59.854633  127254 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:10:59.855097  127254 main.go:141] libmachine: Using API Version  1
	I1212 23:10:59.855125  127254 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:10:59.855435  127254 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:10:59.857268  127254 out.go:177] * Stopping node "default-k8s-diff-port-850839"  ...
	I1212 23:10:59.858749  127254 main.go:141] libmachine: Stopping "default-k8s-diff-port-850839"...
	I1212 23:10:59.858761  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:10:59.860204  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Stop
	I1212 23:10:59.863188  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 0/60
	I1212 23:11:00.864821  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 1/60
	I1212 23:11:01.866107  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 2/60
	I1212 23:11:02.867706  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 3/60
	I1212 23:11:03.869135  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 4/60
	I1212 23:11:04.870941  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 5/60
	I1212 23:11:05.872381  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 6/60
	I1212 23:11:06.873864  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 7/60
	I1212 23:11:07.875326  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 8/60
	I1212 23:11:08.876548  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 9/60
	I1212 23:11:09.878637  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 10/60
	I1212 23:11:10.880006  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 11/60
	I1212 23:11:11.881361  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 12/60
	I1212 23:11:12.882720  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 13/60
	I1212 23:11:13.884162  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 14/60
	I1212 23:11:14.886105  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 15/60
	I1212 23:11:15.887555  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 16/60
	I1212 23:11:16.889732  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 17/60
	I1212 23:11:17.891186  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 18/60
	I1212 23:11:18.892805  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 19/60
	I1212 23:11:19.894701  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 20/60
	I1212 23:11:20.896516  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 21/60
	I1212 23:11:21.897740  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 22/60
	I1212 23:11:22.899203  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 23/60
	I1212 23:11:23.900700  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 24/60
	I1212 23:11:24.901968  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 25/60
	I1212 23:11:25.903401  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 26/60
	I1212 23:11:26.904654  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 27/60
	I1212 23:11:27.906129  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 28/60
	I1212 23:11:28.907645  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 29/60
	I1212 23:11:29.909418  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 30/60
	I1212 23:11:30.911701  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 31/60
	I1212 23:11:31.912969  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 32/60
	I1212 23:11:32.914244  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 33/60
	I1212 23:11:33.915676  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 34/60
	I1212 23:11:34.917613  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 35/60
	I1212 23:11:35.918980  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 36/60
	I1212 23:11:36.920385  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 37/60
	I1212 23:11:37.921798  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 38/60
	I1212 23:11:38.923323  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 39/60
	I1212 23:11:39.924992  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 40/60
	I1212 23:11:40.926441  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 41/60
	I1212 23:11:41.927721  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 42/60
	I1212 23:11:42.929152  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 43/60
	I1212 23:11:43.930409  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 44/60
	I1212 23:11:44.932187  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 45/60
	I1212 23:11:45.933484  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 46/60
	I1212 23:11:46.934869  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 47/60
	I1212 23:11:47.936193  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 48/60
	I1212 23:11:48.937681  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 49/60
	I1212 23:11:49.939485  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 50/60
	I1212 23:11:50.940839  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 51/60
	I1212 23:11:51.942280  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 52/60
	I1212 23:11:52.943739  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 53/60
	I1212 23:11:53.945704  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 54/60
	I1212 23:11:54.947624  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 55/60
	I1212 23:11:55.948951  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 56/60
	I1212 23:11:56.950281  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 57/60
	I1212 23:11:57.951732  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 58/60
	I1212 23:11:58.953068  127254 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for machine to stop 59/60
	I1212 23:11:59.954058  127254 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:11:59.954107  127254 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:11:59.956451  127254 out.go:177] 
	W1212 23:11:59.958157  127254 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 23:11:59.958175  127254 out.go:239] * 
	* 
	W1212 23:11:59.961599  127254 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:11:59.963123  127254 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-850839 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
E1212 23:12:03.307937   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839: exit status 3 (18.565992285s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:18.531561  127957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host
	E1212 23:12:18.531582  127957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850839" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
E1212 23:11:30.610156   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:11:32.734116   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120: exit status 3 (3.19920938s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:33.603590  127624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E1212 23:11:33.603614  127624 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-809120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 23:11:39.569107   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-809120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153669426s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-809120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120: exit status 3 (3.062605836s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:42.819623  127694 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E1212 23:11:42.819648  127694 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-809120" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640: exit status 3 (3.200505636s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:43.331636  127724 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host
	E1212 23:11:43.331716  127724 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-549640 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 23:11:45.406062   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-549640 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153508193s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-549640 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
E1212 23:11:51.091330   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640: exit status 3 (3.062302447s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:11:52.547749  127870 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host
	E1212 23:11:52.547777  127870 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.146:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-549640" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
E1212 23:12:09.617227   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.622570   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.632860   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.653170   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.693491   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.773890   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:09.934346   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023: exit status 3 (3.199405329s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:10.211634  127997 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E1212 23:12:10.211663  127997 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-115023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 23:12:10.255150   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:10.896205   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:12.177314   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:12:13.548497   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:12:13.694812   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:12:14.737962   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-115023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155223923s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-115023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023: exit status 3 (3.060802175s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:19.427668  128086 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host
	E1212 23:12:19.427704  128086 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-115023" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839: exit status 3 (3.200186006s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:21.731653  128116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host
	E1212 23:12:21.731676  128116 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153277487s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
E1212 23:12:30.099650   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839: exit status 3 (3.06242047s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:12:30.947641  128241 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host
	E1212 23:12:30.947672  128241 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850839" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:18:13.361724   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:18:41.045726   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:19:01.564910   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:19:17.803693   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:20:02.202974   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:20:25.171830   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:20:51.770840   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:21:10.129323   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:21:39.568513   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:21:48.218817   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:21:53.067844   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:22:09.616662   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:26:57.51584656 +0000 UTC m=+5057.326904020
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-549640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-549640 logs -n 25: (1.641116578s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-828988 sudo cat                              | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo find                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo crio                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-828988                                       | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-685244 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | disable-driver-mounts-685244                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:12:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:12:31.006246  128282 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:12:31.006380  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006389  128282 out.go:309] Setting ErrFile to fd 2...
	I1212 23:12:31.006393  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006549  128282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:12:31.007106  128282 out.go:303] Setting JSON to false
	I1212 23:12:31.008035  128282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14105,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:12:31.008097  128282 start.go:138] virtualization: kvm guest
	I1212 23:12:31.010317  128282 out.go:177] * [default-k8s-diff-port-850839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:12:31.011782  128282 notify.go:220] Checking for updates...
	I1212 23:12:31.011787  128282 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:12:31.013177  128282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:12:31.014626  128282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:12:31.016153  128282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:12:31.017420  128282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:12:31.018789  128282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:12:31.020548  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:12:31.021022  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.021073  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.036337  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I1212 23:12:31.036724  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.037285  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.037315  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.037677  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.037910  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.038190  128282 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:12:31.038482  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.038521  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.052455  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1212 23:12:31.052897  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.053408  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.053428  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.053842  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.054041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.090916  128282 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:12:31.092159  128282 start.go:298] selected driver: kvm2
	I1212 23:12:31.092174  128282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.092313  128282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:12:31.092991  128282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.093081  128282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:12:31.108612  128282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:12:31.108979  128282 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:12:31.109050  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:12:31.109064  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:12:31.109078  128282 start_flags.go:323] config:
	{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-85083
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.109261  128282 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.110991  128282 out.go:177] * Starting control plane node default-k8s-diff-port-850839 in cluster default-k8s-diff-port-850839
	I1212 23:12:28.611488  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:31.112184  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:12:31.112223  128282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:12:31.112231  128282 cache.go:56] Caching tarball of preloaded images
	I1212 23:12:31.112315  128282 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:12:31.112331  128282 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:12:31.112435  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:12:31.112621  128282 start.go:365] acquiring machines lock for default-k8s-diff-port-850839: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:12:34.691505  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:37.763538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:43.843515  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:46.915553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:52.995487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:56.067468  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:02.147575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:05.219586  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:11.299553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:14.371547  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:20.451538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:23.523565  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:29.603544  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:32.675516  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:38.755580  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:41.827595  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:47.907601  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:50.979707  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:57.059532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:00.131511  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:06.211489  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:09.283534  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:15.363535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:18.435583  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:24.515478  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:27.587546  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:33.667567  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:36.739532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:42.819531  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:45.891616  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:51.971509  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:55.043560  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:01.123510  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:04.195575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:10.275535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:13.347520  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:19.427542  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:22.499524  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:28.579575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:31.651552  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:37.731535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:40.803533  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:46.883561  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:49.955571  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:56.035557  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:59.107536  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:05.187487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:08.259527  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:14.339497  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:17.411598  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:20.416121  127900 start.go:369] acquired machines lock for "old-k8s-version-549640" in 4m27.702597236s
	I1212 23:16:20.416185  127900 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:20.416197  127900 fix.go:54] fixHost starting: 
	I1212 23:16:20.416598  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:20.416638  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:20.431626  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I1212 23:16:20.432088  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:20.432550  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:16:20.432573  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:20.432976  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:20.433174  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:20.433352  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:16:20.435450  127900 fix.go:102] recreateIfNeeded on old-k8s-version-549640: state=Stopped err=<nil>
	I1212 23:16:20.435477  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	W1212 23:16:20.435650  127900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:20.437467  127900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-549640" ...
	I1212 23:16:20.438890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Start
	I1212 23:16:20.439060  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring networks are active...
	I1212 23:16:20.439992  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network default is active
	I1212 23:16:20.440387  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network mk-old-k8s-version-549640 is active
	I1212 23:16:20.440738  127900 main.go:141] libmachine: (old-k8s-version-549640) Getting domain xml...
	I1212 23:16:20.441435  127900 main.go:141] libmachine: (old-k8s-version-549640) Creating domain...
	I1212 23:16:21.692826  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting to get IP...
	I1212 23:16:21.693784  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.694269  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.694313  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.694229  128878 retry.go:31] will retry after 250.302126ms: waiting for machine to come up
	I1212 23:16:21.945651  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.946122  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.946145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.946067  128878 retry.go:31] will retry after 271.460868ms: waiting for machine to come up
	I1212 23:16:22.219848  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.220326  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.220352  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.220248  128878 retry.go:31] will retry after 466.723624ms: waiting for machine to come up
	I1212 23:16:20.413611  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:20.413648  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:16:20.415967  127760 machine.go:91] provisioned docker machine in 4m37.407647774s
	I1212 23:16:20.416013  127760 fix.go:56] fixHost completed within 4m37.429684827s
	I1212 23:16:20.416025  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 4m37.429713708s
	W1212 23:16:20.416055  127760 start.go:694] error starting host: provision: host is not running
	W1212 23:16:20.416230  127760 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:16:20.416241  127760 start.go:709] Will try again in 5 seconds ...
	I1212 23:16:22.689020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.689524  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.689559  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.689474  128878 retry.go:31] will retry after 384.986526ms: waiting for machine to come up
	I1212 23:16:23.076020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.076428  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.076462  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.076365  128878 retry.go:31] will retry after 673.784203ms: waiting for machine to come up
	I1212 23:16:23.752374  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.752825  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.752859  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.752777  128878 retry.go:31] will retry after 744.371791ms: waiting for machine to come up
	I1212 23:16:24.498624  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:24.499057  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:24.499088  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:24.498994  128878 retry.go:31] will retry after 1.095766265s: waiting for machine to come up
	I1212 23:16:25.596742  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:25.597192  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:25.597217  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:25.597133  128878 retry.go:31] will retry after 1.340596782s: waiting for machine to come up
	I1212 23:16:26.939593  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:26.939933  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:26.939957  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:26.939881  128878 retry.go:31] will retry after 1.546075974s: waiting for machine to come up
	I1212 23:16:25.417922  127760 start.go:365] acquiring machines lock for embed-certs-809120: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:16:28.488543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:28.488923  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:28.488949  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:28.488883  128878 retry.go:31] will retry after 2.06517547s: waiting for machine to come up
	I1212 23:16:30.555809  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:30.556300  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:30.556330  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:30.556262  128878 retry.go:31] will retry after 2.237409729s: waiting for machine to come up
	I1212 23:16:32.796273  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:32.796684  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:32.796712  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:32.796629  128878 retry.go:31] will retry after 3.535954383s: waiting for machine to come up
	I1212 23:16:36.333758  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:36.334211  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:36.334243  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:36.334143  128878 retry.go:31] will retry after 3.820382113s: waiting for machine to come up
	I1212 23:16:41.367963  128156 start.go:369] acquired machines lock for "no-preload-115023" in 4m21.778030837s
	I1212 23:16:41.368034  128156 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:41.368046  128156 fix.go:54] fixHost starting: 
	I1212 23:16:41.368459  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:41.368498  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:41.384557  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1212 23:16:41.385004  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:41.385448  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:16:41.385471  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:41.385799  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:41.386007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:16:41.386192  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:16:41.387807  128156 fix.go:102] recreateIfNeeded on no-preload-115023: state=Stopped err=<nil>
	I1212 23:16:41.387858  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	W1212 23:16:41.388030  128156 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:41.390189  128156 out.go:177] * Restarting existing kvm2 VM for "no-preload-115023" ...
	I1212 23:16:40.159111  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159503  127900 main.go:141] libmachine: (old-k8s-version-549640) Found IP for machine: 192.168.61.146
	I1212 23:16:40.159530  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserving static IP address...
	I1212 23:16:40.159543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has current primary IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159970  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.160042  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | skip adding static IP to network mk-old-k8s-version-549640 - found existing host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"}
	I1212 23:16:40.160060  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserved static IP address: 192.168.61.146
	I1212 23:16:40.160072  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for SSH to be available...
	I1212 23:16:40.160087  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Getting to WaitForSSH function...
	I1212 23:16:40.162048  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162377  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.162417  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162498  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH client type: external
	I1212 23:16:40.162571  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa (-rw-------)
	I1212 23:16:40.162609  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:16:40.162629  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | About to run SSH command:
	I1212 23:16:40.162644  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | exit 0
	I1212 23:16:40.254804  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | SSH cmd err, output: <nil>: 
	I1212 23:16:40.255235  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetConfigRaw
	I1212 23:16:40.255885  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.258196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258526  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.258551  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258806  127900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:16:40.259036  127900 machine.go:88] provisioning docker machine ...
	I1212 23:16:40.259059  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:40.259292  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259454  127900 buildroot.go:166] provisioning hostname "old-k8s-version-549640"
	I1212 23:16:40.259475  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259624  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.261311  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261561  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.261583  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261686  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.261818  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.261974  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.262114  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.262270  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.262645  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.262666  127900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-549640 && echo "old-k8s-version-549640" | sudo tee /etc/hostname
	I1212 23:16:40.395342  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-549640
	
	I1212 23:16:40.395376  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.398008  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398391  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.398430  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398533  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.398716  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.398890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.399024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.399152  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.399489  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.399510  127900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-549640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-549640/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-549640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:40.526781  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:40.526824  127900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:16:40.526847  127900 buildroot.go:174] setting up certificates
	I1212 23:16:40.526859  127900 provision.go:83] configureAuth start
	I1212 23:16:40.526877  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.527276  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.530483  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.530876  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.530908  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.531162  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.533161  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533456  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.533488  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533567  127900 provision.go:138] copyHostCerts
	I1212 23:16:40.533625  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:16:40.533645  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:16:40.533711  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:16:40.533799  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:16:40.533806  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:16:40.533829  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:16:40.533882  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:16:40.533889  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:16:40.533913  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:16:40.533957  127900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-549640 san=[192.168.61.146 192.168.61.146 localhost 127.0.0.1 minikube old-k8s-version-549640]
	I1212 23:16:40.630542  127900 provision.go:172] copyRemoteCerts
	I1212 23:16:40.630611  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:40.630639  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.633145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633408  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.633433  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633579  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.633790  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.633944  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.634162  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:40.725498  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:16:40.748097  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:16:40.769852  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:16:40.791381  127900 provision.go:86] duration metric: configureAuth took 264.501961ms
	I1212 23:16:40.791417  127900 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:40.791602  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:16:40.791678  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.794113  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794479  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.794514  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794653  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.794864  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795055  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795234  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.795443  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.795777  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.795807  127900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:16:41.103469  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:16:41.103503  127900 machine.go:91] provisioned docker machine in 844.450063ms
	I1212 23:16:41.103517  127900 start.go:300] post-start starting for "old-k8s-version-549640" (driver="kvm2")
	I1212 23:16:41.103527  127900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:41.103547  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.103894  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:41.103923  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.106459  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.106835  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.106864  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.107013  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.107190  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.107363  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.107532  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.201177  127900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:41.205686  127900 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:41.205711  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:16:41.205773  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:16:41.205862  127900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:16:41.205970  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:41.214591  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:41.240854  127900 start.go:303] post-start completed in 137.32025ms
	I1212 23:16:41.240885  127900 fix.go:56] fixHost completed within 20.824687398s
	I1212 23:16:41.240915  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.243633  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244071  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.244104  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244300  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.244517  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244651  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244806  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.244981  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:41.245337  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:41.245350  127900 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:16:41.367815  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423001.317394085
	
	I1212 23:16:41.367837  127900 fix.go:206] guest clock: 1702423001.317394085
	I1212 23:16:41.367844  127900 fix.go:219] Guest: 2023-12-12 23:16:41.317394085 +0000 UTC Remote: 2023-12-12 23:16:41.240889292 +0000 UTC m=+288.685284781 (delta=76.504793ms)
	I1212 23:16:41.367863  127900 fix.go:190] guest clock delta is within tolerance: 76.504793ms
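Note: the clock check above parses the guest's "date +%s.%N" output and accepts the host when the guest/host delta is small (here 76.504793ms). A minimal Go sketch of that comparison, reusing the values from the log; the one-second tolerance is an assumption for illustration, not minikube's configured limit.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "seconds.nanoseconds" (the output of
	// `date +%s.%N` on the guest) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1702423001.317394085") // value from the log
		if err != nil {
			panic(err)
		}
		remote := time.Date(2023, 12, 12, 23, 16, 41, 240889292, time.UTC)
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		// Tolerance here is an assumed value for illustration only.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
	}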
	I1212 23:16:41.367868  127900 start.go:83] releasing machines lock for "old-k8s-version-549640", held for 20.951706122s
	I1212 23:16:41.367895  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.368219  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:41.370769  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371172  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.371196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371378  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.371904  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372069  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372157  127900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:16:41.372206  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.372409  127900 ssh_runner.go:195] Run: cat /version.json
	I1212 23:16:41.372438  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.374847  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.374869  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375341  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375373  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375401  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375419  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375526  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375661  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375749  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.375835  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.376026  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376031  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.488636  127900 ssh_runner.go:195] Run: systemctl --version
	I1212 23:16:41.494315  127900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:16:41.645474  127900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:16:41.652912  127900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:16:41.652988  127900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:16:41.667662  127900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:16:41.667680  127900 start.go:475] detecting cgroup driver to use...
	I1212 23:16:41.667747  127900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:16:41.681625  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:16:41.693475  127900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:16:41.693540  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:16:41.705743  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:16:41.719152  127900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:16:41.819641  127900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:16:41.929543  127900 docker.go:219] disabling docker service ...
	I1212 23:16:41.929617  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:16:41.943407  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:16:41.955372  127900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:16:42.063078  127900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:16:42.177422  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:16:42.192994  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:16:42.211887  127900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:16:42.211943  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.223418  127900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:16:42.223486  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.234905  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.245973  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.261016  127900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:16:42.272819  127900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:16:42.283308  127900 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:16:42.283381  127900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:16:42.296365  127900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:16:42.307038  127900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:16:42.412672  127900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:16:42.590363  127900 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:16:42.590470  127900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:16:42.596285  127900 start.go:543] Will wait 60s for crictl version
	I1212 23:16:42.596360  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:42.600633  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:16:42.638709  127900 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:16:42.638811  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.694435  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.750327  127900 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
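Note: after restarting CRI-O, the log waits up to 60s for the socket at /var/run/crio/crio.sock to exist and then for crictl to answer. A minimal Go sketch of that kind of deadline-bounded wait; the 500ms poll interval is an assumption, not minikube's actual polling cadence.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the given path exists or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond) // poll interval is illustrative
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}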
	I1212 23:16:41.391501  128156 main.go:141] libmachine: (no-preload-115023) Calling .Start
	I1212 23:16:41.391671  128156 main.go:141] libmachine: (no-preload-115023) Ensuring networks are active...
	I1212 23:16:41.392314  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network default is active
	I1212 23:16:41.392624  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network mk-no-preload-115023 is active
	I1212 23:16:41.393075  128156 main.go:141] libmachine: (no-preload-115023) Getting domain xml...
	I1212 23:16:41.393720  128156 main.go:141] libmachine: (no-preload-115023) Creating domain...
	I1212 23:16:42.669200  128156 main.go:141] libmachine: (no-preload-115023) Waiting to get IP...
	I1212 23:16:42.670068  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.670482  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.670582  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.670462  128998 retry.go:31] will retry after 201.350715ms: waiting for machine to come up
	I1212 23:16:42.874061  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.874543  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.874576  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.874492  128998 retry.go:31] will retry after 331.205906ms: waiting for machine to come up
	I1212 23:16:43.207045  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.207590  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.207618  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.207533  128998 retry.go:31] will retry after 343.139691ms: waiting for machine to come up
	I1212 23:16:43.552253  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.552737  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.552769  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.552683  128998 retry.go:31] will retry after 606.192126ms: waiting for machine to come up
	I1212 23:16:44.160409  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.160877  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.160923  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.160842  128998 retry.go:31] will retry after 713.164162ms: waiting for machine to come up
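Note: the repeated "will retry after ...: waiting for machine to come up" lines come from a backoff loop that re-queries libvirt for the new machine's DHCP lease until an IP address appears. A minimal Go sketch of such a retry loop; the attempt count and backoff schedule here are illustrative, not the values minikube's retry package actually uses.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling fn until it succeeds or attempts are exhausted,
	// sleeping a jittered, growing delay between tries, in the spirit of the
	// "will retry after Xms: waiting for machine to come up" lines above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retry(5, 200*time.Millisecond, func() error {
			// Stand-in for "look up the domain's current IP address".
			return errors.New("waiting for machine to come up")
		})
		fmt.Println("final result:", err)
	}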
	I1212 23:16:42.751897  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:42.754490  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.754832  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:42.754867  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.755047  127900 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:16:42.759290  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:42.770851  127900 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 23:16:42.770913  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:42.822484  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:42.822559  127900 ssh_runner.go:195] Run: which lz4
	I1212 23:16:42.826907  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:16:42.831601  127900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:16:42.831633  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 23:16:44.643588  127900 crio.go:444] Took 1.816704 seconds to copy over tarball
	I1212 23:16:44.643671  127900 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:16:47.603870  127900 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960150759s)
	I1212 23:16:47.603904  127900 crio.go:451] Took 2.960288 seconds to extract the tarball
	I1212 23:16:47.603918  127900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:16:44.875548  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.875971  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.876003  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.875908  128998 retry.go:31] will retry after 928.762857ms: waiting for machine to come up
	I1212 23:16:45.806556  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:45.806983  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:45.807019  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:45.806932  128998 retry.go:31] will retry after 945.322601ms: waiting for machine to come up
	I1212 23:16:46.754374  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:46.754834  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:46.754869  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:46.754818  128998 retry.go:31] will retry after 1.373584303s: waiting for machine to come up
	I1212 23:16:48.130434  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:48.130917  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:48.130950  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:48.130870  128998 retry.go:31] will retry after 1.683447661s: waiting for machine to come up
	I1212 23:16:47.644193  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:47.696129  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:47.696156  127900 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.696314  127900 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.696273  127900 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.696242  127900 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.696306  127900 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.696371  127900 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.696445  127900 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:16:47.697649  127900 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.697713  127900 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.697816  127900 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.697955  127900 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:16:47.698013  127900 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.698109  127900 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.698124  127900 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.698341  127900 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.888397  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.897712  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.897790  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.910016  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 23:16:47.911074  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.912891  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.923071  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.995042  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:48.022161  127900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 23:16:48.022215  127900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.022270  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053132  127900 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 23:16:48.053181  127900 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.053236  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053493  127900 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 23:16:48.053531  127900 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.053588  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.123888  127900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 23:16:48.123949  127900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.123889  127900 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 23:16:48.124009  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124022  127900 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 23:16:48.124077  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124089  127900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 23:16:48.124111  127900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 23:16:48.124141  127900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.124171  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124115  127900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.124249  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.205456  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.205488  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.205609  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.205650  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.205702  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 23:16:48.205789  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.205814  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.351665  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 23:16:48.351700  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 23:16:48.360026  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 23:16:48.363255  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 23:16:48.363297  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 23:16:48.363376  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 23:16:48.363413  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:16:48.363525  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369271  127900 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 23:16:48.369289  127900 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369326  127900 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 23:16:50.628595  127900 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.259242667s)
	I1212 23:16:50.628629  127900 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 23:16:50.628679  127900 cache_images.go:92] LoadImages completed in 2.932510127s
	W1212 23:16:50.628774  127900 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
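Note: the cached-image paths in the warning above follow a simple layout — each image reference lives under .minikube/cache/images/<arch>/ with the tag separator ':' replaced by '_'. A small Go sketch of that path derivation, inferred from the paths printed in the log rather than taken from minikube's source.

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath derives the on-disk cache location for an image reference the
	// way the paths in the log above are laid out. The layout is inferred from
	// the log output, not from minikube's code.
	func cachePath(miniHome, arch, image string) string {
		return filepath.Join(miniHome, "cache", "images", arch,
			strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/home/jenkins/minikube-integration/17761-76611/.minikube",
			"amd64", "registry.k8s.io/kube-scheduler:v1.16.0"))
	}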
	I1212 23:16:50.628871  127900 ssh_runner.go:195] Run: crio config
	I1212 23:16:50.696623  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:16:50.696645  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:16:50.696665  127900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:16:50.696690  127900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-549640 NodeName:old-k8s-version-549640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:16:50.696857  127900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-549640"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-549640
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.146:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:16:50.696950  127900 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-549640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:16:50.697013  127900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 23:16:50.706222  127900 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:16:50.706309  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:16:50.714679  127900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 23:16:50.732119  127900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:16:50.749596  127900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 23:16:50.766445  127900 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I1212 23:16:50.770611  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:50.783162  127900 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640 for IP: 192.168.61.146
	I1212 23:16:50.783205  127900 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:16:50.783434  127900 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:16:50.783504  127900 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:16:50.783623  127900 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.key
	I1212 23:16:50.783701  127900 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key.a124ebb4
	I1212 23:16:50.783781  127900 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key
	I1212 23:16:50.784002  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:16:50.784053  127900 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:16:50.784070  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:16:50.784118  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:16:50.784162  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:16:50.784201  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:16:50.784260  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:50.785202  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:16:50.813072  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:16:50.838714  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:16:50.863302  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:16:50.891365  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:16:50.916623  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:16:50.946894  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:16:50.974859  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:16:51.002629  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:16:51.027782  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:16:51.052384  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:16:51.077430  127900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:16:51.094703  127900 ssh_runner.go:195] Run: openssl version
	I1212 23:16:51.100625  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:16:51.111038  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116246  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116342  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.122069  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:16:51.132325  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:16:51.142392  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147278  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147353  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.153446  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:16:51.163491  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:16:51.173393  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178482  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178560  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.184710  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:16:51.194819  127900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:16:51.199808  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:16:51.206208  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:16:51.212498  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:16:51.218555  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:16:51.224923  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:16:51.231298  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
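Note: each "openssl x509 -noout -in ... -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours). A minimal Go sketch of the equivalent check using crypto/x509; the path in main is reused from the log and is otherwise arbitrary.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring `openssl x509 -noout -checkend 86400` in the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}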
	I1212 23:16:51.237570  127900 kubeadm.go:404] StartCluster: {Name:old-k8s-version-549640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:16:51.237672  127900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:16:51.237752  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:16:51.283890  127900 cri.go:89] found id: ""
	I1212 23:16:51.283985  127900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:16:51.296861  127900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:16:51.296897  127900 kubeadm.go:636] restartCluster start
	I1212 23:16:51.296990  127900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:16:51.306034  127900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.307730  127900 kubeconfig.go:92] found "old-k8s-version-549640" server: "https://192.168.61.146:8443"
	I1212 23:16:51.311721  127900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:16:51.320683  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.320831  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.332122  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.332145  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.332197  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.342755  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.843464  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.843575  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.854933  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:52.343493  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.343579  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.354884  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:49.816605  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:49.816934  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:49.816968  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:49.816881  128998 retry.go:31] will retry after 1.775884699s: waiting for machine to come up
	I1212 23:16:51.594388  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:51.594915  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:51.594952  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:51.594866  128998 retry.go:31] will retry after 1.948886075s: waiting for machine to come up
	I1212 23:16:53.546035  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:53.546503  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:53.546538  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:53.546441  128998 retry.go:31] will retry after 3.530621748s: waiting for machine to come up
	I1212 23:16:52.842987  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.843085  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.854637  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.343155  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.343261  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.354960  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.843482  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.843555  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.854488  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.342926  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.343028  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.357489  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.843024  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.843111  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.854764  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.343252  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.343363  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.354798  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.843831  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.843931  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.855077  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.343753  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.343827  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.354659  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.843304  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.843423  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.854727  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.343292  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.343428  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.354360  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.078854  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:57.079265  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:57.079287  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:57.079224  128998 retry.go:31] will retry after 3.552473985s: waiting for machine to come up
	I1212 23:17:01.924642  128282 start.go:369] acquired machines lock for "default-k8s-diff-port-850839" in 4m30.811975302s
	I1212 23:17:01.924716  128282 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:01.924725  128282 fix.go:54] fixHost starting: 
	I1212 23:17:01.925164  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:01.925207  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:01.942895  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1212 23:17:01.943340  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:01.943906  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:01.943938  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:01.944371  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:01.944594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:01.944819  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:01.946719  128282 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850839: state=Stopped err=<nil>
	I1212 23:17:01.946759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	W1212 23:17:01.946947  128282 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:01.949597  128282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850839" ...
	I1212 23:16:57.843410  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.843484  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.854821  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.343379  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.343470  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.354868  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.843473  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.843594  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.854752  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.343324  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.343432  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.354442  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.842979  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.843086  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.854537  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.343125  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.343201  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.354401  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.843565  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.843642  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.854663  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:01.321433  127900 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:01.321466  127900 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:01.321477  127900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:01.321534  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:01.361643  127900 cri.go:89] found id: ""
	I1212 23:17:01.361739  127900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:01.380002  127900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:01.388875  127900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:01.388944  127900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397644  127900 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397690  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:01.528111  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:00.635998  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636444  128156 main.go:141] libmachine: (no-preload-115023) Found IP for machine: 192.168.72.32
	I1212 23:17:00.636462  128156 main.go:141] libmachine: (no-preload-115023) Reserving static IP address...
	I1212 23:17:00.636478  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has current primary IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.636925  128156 main.go:141] libmachine: (no-preload-115023) DBG | skip adding static IP to network mk-no-preload-115023 - found existing host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"}
	I1212 23:17:00.636939  128156 main.go:141] libmachine: (no-preload-115023) Reserved static IP address: 192.168.72.32
	I1212 23:17:00.636961  128156 main.go:141] libmachine: (no-preload-115023) Waiting for SSH to be available...
	I1212 23:17:00.636971  128156 main.go:141] libmachine: (no-preload-115023) DBG | Getting to WaitForSSH function...
	I1212 23:17:00.639074  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639400  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.639443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639546  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH client type: external
	I1212 23:17:00.639586  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa (-rw-------)
	I1212 23:17:00.639629  128156 main.go:141] libmachine: (no-preload-115023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:00.639644  128156 main.go:141] libmachine: (no-preload-115023) DBG | About to run SSH command:
	I1212 23:17:00.639663  128156 main.go:141] libmachine: (no-preload-115023) DBG | exit 0
	I1212 23:17:00.734735  128156 main.go:141] libmachine: (no-preload-115023) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:00.735132  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetConfigRaw
	I1212 23:17:00.735813  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:00.738429  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.738828  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.738871  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.739049  128156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:17:00.739276  128156 machine.go:88] provisioning docker machine ...
	I1212 23:17:00.739299  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:00.739537  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739695  128156 buildroot.go:166] provisioning hostname "no-preload-115023"
	I1212 23:17:00.739717  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739879  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.742096  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742404  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.742443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742591  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.742756  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.742925  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.743067  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.743224  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.743733  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.743751  128156 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-115023 && echo "no-preload-115023" | sudo tee /etc/hostname
	I1212 23:17:00.888573  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-115023
	
	I1212 23:17:00.888610  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.891302  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891619  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.891664  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891852  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.892092  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892263  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892419  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.892584  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.892911  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.892930  128156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-115023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-115023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-115023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:01.032180  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:01.032222  128156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:01.032257  128156 buildroot.go:174] setting up certificates
	I1212 23:17:01.032273  128156 provision.go:83] configureAuth start
	I1212 23:17:01.032291  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:01.032653  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.035024  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035334  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.035361  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035494  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.037594  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.037898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.037930  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.038066  128156 provision.go:138] copyHostCerts
	I1212 23:17:01.038122  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:01.038143  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:01.038202  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:01.038322  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:01.038334  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:01.038355  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:01.038470  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:01.038481  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:01.038499  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:01.038575  128156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.no-preload-115023 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube no-preload-115023]
	I1212 23:17:01.146965  128156 provision.go:172] copyRemoteCerts
	I1212 23:17:01.147027  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:01.147053  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.149326  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149621  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.149656  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149774  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.149969  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.150118  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.150238  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.244271  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:01.267206  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:17:01.289286  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:01.311940  128156 provision.go:86] duration metric: configureAuth took 279.648376ms
	I1212 23:17:01.311970  128156 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:01.312144  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:17:01.312229  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.314543  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.314881  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.314907  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.315055  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.315281  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315469  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315658  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.315821  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.316162  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.316185  128156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:01.644687  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:01.644737  128156 machine.go:91] provisioned docker machine in 905.44182ms
	I1212 23:17:01.644750  128156 start.go:300] post-start starting for "no-preload-115023" (driver="kvm2")
	I1212 23:17:01.644764  128156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:01.644781  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.645148  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:01.645186  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.647976  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648333  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.648369  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648572  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.648769  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.648972  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.649102  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.746191  128156 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:01.750374  128156 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:01.750416  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:01.750499  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:01.750605  128156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:01.750721  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:01.760389  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:01.788014  128156 start.go:303] post-start completed in 143.244652ms
	I1212 23:17:01.788052  128156 fix.go:56] fixHost completed within 20.420006869s
	I1212 23:17:01.788083  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.790868  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791357  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.791392  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791675  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.791911  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792276  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.792463  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.792889  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.792903  128156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:01.924437  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423021.865464875
	
	I1212 23:17:01.924464  128156 fix.go:206] guest clock: 1702423021.865464875
	I1212 23:17:01.924477  128156 fix.go:219] Guest: 2023-12-12 23:17:01.865464875 +0000 UTC Remote: 2023-12-12 23:17:01.788058057 +0000 UTC m=+282.352654726 (delta=77.406818ms)
	I1212 23:17:01.924532  128156 fix.go:190] guest clock delta is within tolerance: 77.406818ms
	I1212 23:17:01.924542  128156 start.go:83] releasing machines lock for "no-preload-115023", held for 20.556534447s
	I1212 23:17:01.924581  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.924871  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.927873  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928206  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.928238  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928450  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929098  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929301  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929387  128156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:01.929448  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.929516  128156 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:01.929559  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.932560  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.932593  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933001  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933031  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933059  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933081  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933340  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933430  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933547  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933659  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933919  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.933923  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.934097  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.934170  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:02.029559  128156 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:02.056382  128156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:02.199375  128156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:02.207131  128156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:02.207208  128156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:02.227083  128156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:02.227111  128156 start.go:475] detecting cgroup driver to use...
	I1212 23:17:02.227174  128156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:02.241611  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:02.253610  128156 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:02.253675  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:02.266973  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:02.280712  128156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:02.406583  128156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:02.548155  128156 docker.go:219] disabling docker service ...
	I1212 23:17:02.548235  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:02.563410  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:02.575968  128156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:02.697146  128156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:02.828963  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:02.842559  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:02.865357  128156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:02.865433  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.878154  128156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:02.878231  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.892188  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.903286  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.915201  128156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:02.927665  128156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:02.938466  128156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:02.938538  128156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:02.954428  128156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:02.966197  128156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:03.109663  128156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:03.322982  128156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:03.323068  128156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:03.329800  128156 start.go:543] Will wait 60s for crictl version
	I1212 23:17:03.329866  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.335779  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:03.385099  128156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:03.385190  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.438085  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.482280  128156 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:17:03.483965  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:03.487086  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487464  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:03.487495  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487694  128156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:03.492027  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:03.506463  128156 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:17:03.506503  128156 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:03.544301  128156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:17:03.544329  128156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:17:03.544386  128156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.544441  128156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.544474  128156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.544440  128156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.544509  128156 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.544527  128156 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 23:17:03.545656  128156 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.545678  128156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.545726  128156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.545657  128156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.545747  128156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.545758  128156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.545662  128156 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 23:17:03.546098  128156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.724978  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.727403  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.739085  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 23:17:03.747535  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.748286  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.780484  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.826808  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.834529  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.840840  128156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 23:17:03.840893  128156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.840940  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.868056  128156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 23:17:03.868106  128156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.868157  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.043948  128156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 23:17:04.044014  128156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.044063  128156 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 23:17:04.044102  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044167  128156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 23:17:04.044207  128156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.044252  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044103  128156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.044334  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044375  128156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 23:17:04.044401  128156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.044444  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:04.044446  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044489  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:04.044401  128156 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 23:17:04.044520  128156 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.044545  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.065308  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.065326  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.065380  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.065495  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.065541  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.167939  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.168062  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.207196  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.207344  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.261679  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 23:17:04.261767  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:04.293250  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 23:17:04.293382  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:04.298843  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.298927  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.298960  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.299043  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.299066  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 23:17:04.299125  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:04.299187  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299201  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.299219  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299272  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.302178  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 23:17:04.302502  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 23:17:04.311377  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 23:17:04.311421  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 23:17:01.950988  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Start
	I1212 23:17:01.951206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring networks are active...
	I1212 23:17:01.952109  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network default is active
	I1212 23:17:01.952459  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network mk-default-k8s-diff-port-850839 is active
	I1212 23:17:01.953041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Getting domain xml...
	I1212 23:17:01.953769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Creating domain...
	I1212 23:17:03.377195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting to get IP...
	I1212 23:17:03.378157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378619  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378696  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.378589  129129 retry.go:31] will retry after 235.08446ms: waiting for machine to come up
	I1212 23:17:03.614763  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615258  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615288  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.615169  129129 retry.go:31] will retry after 349.415903ms: waiting for machine to come up
	I1212 23:17:03.965990  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966570  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966670  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.966628  129129 retry.go:31] will retry after 318.332956ms: waiting for machine to come up
	I1212 23:17:04.286225  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286728  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.286676  129129 retry.go:31] will retry after 554.258457ms: waiting for machine to come up
	I1212 23:17:04.843362  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843928  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843975  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.843882  129129 retry.go:31] will retry after 539.399246ms: waiting for machine to come up
	I1212 23:17:05.384807  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385237  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385267  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:05.385213  129129 retry.go:31] will retry after 793.160743ms: waiting for machine to come up
	I1212 23:17:02.653275  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125123388s)
	I1212 23:17:02.653305  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:02.888884  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.005743  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.124339  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:03.124427  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.154719  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.679193  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.179381  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.678654  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.701429  127900 api_server.go:72] duration metric: took 1.577102613s to wait for apiserver process to appear ...
	I1212 23:17:04.701456  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:04.701476  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:06.586652  128156 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.287578103s)
	I1212 23:17:06.586693  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 23:17:06.586710  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.28741029s)
	I1212 23:17:06.586731  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 23:17:06.586768  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:06.586859  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:09.053122  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.466228622s)
	I1212 23:17:09.053156  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 23:17:09.053187  128156 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:09.053239  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:06.180206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180792  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180826  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:06.180767  129129 retry.go:31] will retry after 1.183884482s: waiting for machine to come up
	I1212 23:17:07.365977  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366537  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:07.366465  129129 retry.go:31] will retry after 1.171346567s: waiting for machine to come up
	I1212 23:17:08.539985  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540457  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540493  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:08.540397  129129 retry.go:31] will retry after 1.176896883s: waiting for machine to come up
	I1212 23:17:09.718657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719110  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719142  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:09.719045  129129 retry.go:31] will retry after 2.075378734s: waiting for machine to come up
	I1212 23:17:09.703531  127900 api_server.go:269] stopped: https://192.168.61.146:8443/healthz: Get "https://192.168.61.146:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 23:17:09.703600  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:10.880325  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:10.880391  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:11.380886  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.408357  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.408420  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:11.880867  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.888735  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.888783  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:12.381393  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:12.390271  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:12.399780  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:12.399818  127900 api_server.go:131] duration metric: took 7.698353874s to wait for apiserver health ...
	I1212 23:17:12.399832  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:17:12.399842  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:12.401614  127900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:12.403088  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:12.416722  127900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:12.439451  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:12.452826  127900 system_pods.go:59] 7 kube-system pods found
	I1212 23:17:12.452870  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:12.452879  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:12.452886  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:12.452893  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Pending
	I1212 23:17:12.452901  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:12.452907  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:12.452914  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:12.452924  127900 system_pods.go:74] duration metric: took 13.446573ms to wait for pod list to return data ...
	I1212 23:17:12.452937  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:12.459638  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:12.459679  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:12.459697  127900 node_conditions.go:105] duration metric: took 6.754094ms to run NodePressure ...
	I1212 23:17:12.459722  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:12.767529  127900 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775696  127900 kubeadm.go:787] kubelet initialised
	I1212 23:17:12.775720  127900 kubeadm.go:788] duration metric: took 8.16519ms waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775730  127900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:12.781477  127900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.789136  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789163  127900 pod_ready.go:81] duration metric: took 7.661481ms waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.789174  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789183  127900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.794618  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794658  127900 pod_ready.go:81] duration metric: took 5.45869ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.794671  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794689  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.801021  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801052  127900 pod_ready.go:81] duration metric: took 6.346779ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.801065  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801074  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.845211  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845243  127900 pod_ready.go:81] duration metric: took 44.152184ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.845256  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845263  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.244325  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244373  127900 pod_ready.go:81] duration metric: took 399.10083ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.244387  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244403  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.644414  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644512  127900 pod_ready.go:81] duration metric: took 400.062676ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.644545  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644566  127900 pod_ready.go:38] duration metric: took 868.822745ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:13.644601  127900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:13.674724  127900 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:13.674813  127900 kubeadm.go:640] restartCluster took 22.377904832s
	I1212 23:17:13.674838  127900 kubeadm.go:406] StartCluster complete in 22.437279451s
	I1212 23:17:13.674872  127900 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.674959  127900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:13.677846  127900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.680423  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:13.680690  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:17:13.680746  127900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:13.680815  127900 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-549640"
	I1212 23:17:13.680839  127900 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-549640"
	W1212 23:17:13.680850  127900 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:13.680938  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.681342  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.681377  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.681658  127900 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-549640"
	I1212 23:17:13.681702  127900 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-549640"
	W1212 23:17:13.681711  127900 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:13.681780  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.682200  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.682237  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.682462  127900 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-549640"
	I1212 23:17:13.682544  127900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-549640"
	I1212 23:17:13.683062  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.683126  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.702138  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1212 23:17:13.702380  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I1212 23:17:13.702684  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702944  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702956  127900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-549640" context rescaled to 1 replicas
	I1212 23:17:13.702990  127900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:13.704074  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.704211  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.706640  127900 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:13.708293  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:13.706664  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706671  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706806  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I1212 23:17:13.709240  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.709383  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.709441  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.709852  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.709874  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.710209  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.710818  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.710867  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.711123  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.711765  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.711842  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.717964  127900 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-549640"
	W1212 23:17:13.717989  127900 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:13.718020  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.718447  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.718493  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.738529  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1212 23:17:13.739214  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.739827  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.739854  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.740246  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.740847  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.740917  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.747710  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1212 23:17:13.748150  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.748772  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.748793  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.749177  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.749348  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.749413  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 23:17:13.750144  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.751385  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.753201  127900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:13.754814  127900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:13.754827  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:13.754840  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.754702  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.754893  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.756310  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.756707  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.758906  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.758937  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.758961  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.760001  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.760051  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.760288  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.763360  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.763607  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.770081  127900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:10.003107  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 23:17:10.003162  128156 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:10.003218  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:12.288548  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.285296733s)
	I1212 23:17:12.288591  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 23:17:12.288623  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:12.288674  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:13.771543  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:13.771565  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:13.769624  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I1212 23:17:13.771589  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.772282  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.772841  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.772898  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.773284  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.773451  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.775327  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.775699  127900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:13.775713  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:13.775738  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.779093  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779539  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.779563  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779784  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.779957  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.780110  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.780255  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.787297  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.787663  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.787729  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.788010  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.789645  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.789826  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.790032  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.956110  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:13.956139  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:13.974813  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:14.024369  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:14.045961  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:14.045998  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:14.133161  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.133192  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:14.342486  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.827118  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146649731s)
	I1212 23:17:14.827249  127900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:14.827300  127900 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.118984074s)
	I1212 23:17:14.827324  127900 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:15.050916  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.076057269s)
	I1212 23:17:15.051030  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051049  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.051444  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.051497  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.051508  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.051517  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051527  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.053501  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.053573  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.053586  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.229413  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.229504  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.229934  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.231467  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.231489  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.522482  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.49806272s)
	I1212 23:17:15.522554  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.522574  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.522920  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.522971  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.522989  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.523009  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.523024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.523301  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.523322  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558083  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.21554598s)
	I1212 23:17:15.558173  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558200  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.558568  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.558591  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558603  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558613  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.559348  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.559370  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.559364  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.559387  127900 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-549640"
	I1212 23:17:15.562044  127900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 23:17:11.796385  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796896  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796930  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:11.796831  129129 retry.go:31] will retry after 2.569081306s: waiting for machine to come up
	I1212 23:17:14.369090  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:14.369522  129129 retry.go:31] will retry after 3.566691604s: waiting for machine to come up
	I1212 23:17:15.563724  127900 addons.go:502] enable addons completed in 1.882971652s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 23:17:17.065214  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:15.574585  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.285870336s)
	I1212 23:17:15.574622  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 23:17:15.574667  128156 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:15.574736  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:17.937618  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938021  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938052  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:17.937984  129129 retry.go:31] will retry after 2.790781234s: waiting for machine to come up
	I1212 23:17:20.730659  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731151  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731179  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:20.731128  129129 retry.go:31] will retry after 5.345575973s: waiting for machine to come up
	I1212 23:17:19.564344  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:21.564330  127900 node_ready.go:49] node "old-k8s-version-549640" has status "Ready":"True"
	I1212 23:17:21.564356  127900 node_ready.go:38] duration metric: took 6.737022414s waiting for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:21.564367  127900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:21.569573  127900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:19.606668  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.031891087s)
	I1212 23:17:19.606701  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 23:17:19.606731  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:19.606791  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:21.765860  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.159035751s)
	I1212 23:17:21.765896  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 23:17:21.765934  128156 cache_images.go:123] Successfully loaded all cached images
	I1212 23:17:21.765944  128156 cache_images.go:92] LoadImages completed in 18.221602939s
	I1212 23:17:21.766033  128156 ssh_runner.go:195] Run: crio config
	I1212 23:17:21.818966  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:21.818996  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:21.819021  128156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:21.819048  128156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-115023 NodeName:no-preload-115023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:21.819220  128156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-115023"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
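	The rendered kubeadm/kubelet/kube-proxy config above is produced from the cluster parameters logged at 23:17:21.819048 and is later copied to the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp at 23:17:21.869927 below). A minimal sketch of that render step using Go's text/template; the type and field names here are hypothetical stand-ins, not minikube's actual template code:

    package main

    import (
        "os"
        "text/template"
    )

    // Params holds the handful of values this sketch substitutes into the
    // config; the real generator carries many more (hypothetical type).
    type Params struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        p := Params{
            AdvertiseAddress: "192.168.72.32",
            BindPort:         8443,
            NodeName:         "no-preload-115023",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        // Write the rendered YAML to stdout; minikube instead ships the
        // result to the guest over SSH.
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }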
	
	I1212 23:17:21.819310  128156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-115023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:21.819369  128156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:17:21.829605  128156 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:21.829690  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:21.838518  128156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 23:17:21.854214  128156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:17:21.869927  128156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1212 23:17:21.886723  128156 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:21.890481  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
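	The /bin/bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current mapping, so repeated starts stay idempotent. The same edit expressed as a small Go helper, purely as a sketch of what the shell pipeline does (the hosts.test path is a local stand-in, not anything minikube uses):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps name to ip,
    // mirroring the grep -v / echo pipeline in the log above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("hosts.test", "192.168.72.32", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }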
	I1212 23:17:21.902964  128156 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023 for IP: 192.168.72.32
	I1212 23:17:21.902993  128156 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:21.903156  128156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:21.903194  128156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:21.903275  128156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.key
	I1212 23:17:21.903357  128156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key.9d394d40
	I1212 23:17:21.903393  128156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key
	I1212 23:17:21.903509  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:21.903540  128156 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:21.903550  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:21.903583  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:21.903623  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:21.903647  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:21.903687  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:21.904310  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:21.928095  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:17:21.951412  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:21.974936  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:21.997877  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:22.020598  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:22.042859  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:22.065941  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:22.088688  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:22.110493  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:22.132736  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:22.154394  128156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:22.170427  128156 ssh_runner.go:195] Run: openssl version
	I1212 23:17:22.176106  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:22.186617  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191355  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191423  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.196989  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:22.208456  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:22.219355  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224154  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224224  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.230069  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:22.240929  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:22.251836  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256441  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256496  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.261952  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:22.272452  128156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:22.277105  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:22.283114  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:22.288860  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:22.294416  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:22.300148  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:22.306380  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
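	Each openssl x509 -checkend 86400 run above asks whether the given certificate expires within the next 24 hours; a failure here is what forces certificate regeneration on restart. The equivalent check in Go with crypto/x509, as an illustrative sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at
    // path will no longer be valid checkend from now -- the same question
    // openssl x509 -checkend answers.
    func expiresWithin(path string, checkend time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(checkend).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }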
	I1212 23:17:22.316419  128156 kubeadm.go:404] StartCluster: {Name:no-preload-115023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:22.316550  128156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:22.316623  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:22.358616  128156 cri.go:89] found id: ""
	I1212 23:17:22.358703  128156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:22.368800  128156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:22.368823  128156 kubeadm.go:636] restartCluster start
	I1212 23:17:22.368883  128156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:22.378570  128156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.380161  128156 kubeconfig.go:92] found "no-preload-115023" server: "https://192.168.72.32:8443"
	I1212 23:17:22.383451  128156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:22.392995  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.393064  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.405318  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.405337  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.405370  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.416721  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.917468  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.917571  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.929995  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.417616  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.417752  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.430907  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.917522  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.917607  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.929655  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:24.417316  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.417427  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.429590  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
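	The repeated "Checking apiserver status" entries are the restart path probing roughly twice a second for a kube-apiserver process via pgrep before deciding how much of the control plane to rebuild. The general shape of that poll-until-deadline loop, sketched in Go (the probe command is taken from the log; the timeout value is illustrative):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning is the stand-in for the `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // probe in the log: pgrep exits 0 only when a matching process exists.
    func apiserverRunning() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // waitForAPIServer polls check every interval until it succeeds or ctx expires.
    func waitForAPIServer(ctx context.Context, interval time.Duration, check func() bool) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for kube-apiserver")
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond, apiserverRunning))
    }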
	I1212 23:17:27.436348  127760 start.go:369] acquired machines lock for "embed-certs-809120" in 1m2.018372087s
	I1212 23:17:27.436407  127760 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:27.436418  127760 fix.go:54] fixHost starting: 
	I1212 23:17:27.436818  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:27.436856  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:27.453079  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1212 23:17:27.453449  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:27.453967  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:17:27.453999  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:27.454365  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:27.454580  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:27.454743  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:17:27.456367  127760 fix.go:102] recreateIfNeeded on embed-certs-809120: state=Stopped err=<nil>
	I1212 23:17:27.456395  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	W1212 23:17:27.456549  127760 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:27.458402  127760 out.go:177] * Restarting existing kvm2 VM for "embed-certs-809120" ...
	I1212 23:17:23.588762  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:26.087305  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:27.459818  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Start
	I1212 23:17:27.459994  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring networks are active...
	I1212 23:17:27.460587  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network default is active
	I1212 23:17:27.460997  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network mk-embed-certs-809120 is active
	I1212 23:17:27.461361  127760 main.go:141] libmachine: (embed-certs-809120) Getting domain xml...
	I1212 23:17:27.462026  127760 main.go:141] libmachine: (embed-certs-809120) Creating domain...
	I1212 23:17:26.081099  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Found IP for machine: 192.168.39.180
	I1212 23:17:26.081626  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has current primary IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081637  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserving static IP address...
	I1212 23:17:26.082029  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserved static IP address: 192.168.39.180
	I1212 23:17:26.082080  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.082119  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for SSH to be available...
	I1212 23:17:26.082157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | skip adding static IP to network mk-default-k8s-diff-port-850839 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"}
	I1212 23:17:26.082182  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Getting to WaitForSSH function...
	I1212 23:17:26.084444  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.084803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084864  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH client type: external
	I1212 23:17:26.084925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa (-rw-------)
	I1212 23:17:26.084971  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:26.084992  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | About to run SSH command:
	I1212 23:17:26.085006  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | exit 0
	I1212 23:17:26.175122  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:26.175455  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetConfigRaw
	I1212 23:17:26.176092  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.178747  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179016  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.179044  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179388  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:17:26.179602  128282 machine.go:88] provisioning docker machine ...
	I1212 23:17:26.179624  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:26.179853  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180033  128282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850839"
	I1212 23:17:26.180051  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180209  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.182470  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.182812  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.182848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.183003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.183193  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183374  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183538  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.183709  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.184100  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.184115  128282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850839 && echo "default-k8s-diff-port-850839" | sudo tee /etc/hostname
	I1212 23:17:26.313520  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850839
	
	I1212 23:17:26.313562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.316848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.317633  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317817  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.318047  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318229  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318344  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.318567  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.318888  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.318907  128282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850839/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:26.443174  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:26.443206  128282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:26.443224  128282 buildroot.go:174] setting up certificates
	I1212 23:17:26.443255  128282 provision.go:83] configureAuth start
	I1212 23:17:26.443273  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.443628  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.446155  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446467  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.446501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446568  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.449661  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450005  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.450041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450170  128282 provision.go:138] copyHostCerts
	I1212 23:17:26.450235  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:26.450258  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:26.450330  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:26.450442  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:26.450453  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:26.450483  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:26.450555  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:26.450565  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:26.450592  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:26.450656  128282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850839 san=[192.168.39.180 192.168.39.180 localhost 127.0.0.1 minikube default-k8s-diff-port-850839]
	I1212 23:17:26.688969  128282 provision.go:172] copyRemoteCerts
	I1212 23:17:26.689035  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:26.689060  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.691731  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692004  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.692033  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692207  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.692441  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.692607  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.692736  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:26.781407  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:26.804712  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:17:26.827036  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:26.848977  128282 provision.go:86] duration metric: configureAuth took 405.706324ms
	I1212 23:17:26.849006  128282 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:26.849214  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:26.849310  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.851925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852281  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.852314  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852486  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.852679  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.852860  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.853003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.853172  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.853688  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.853711  128282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:27.183932  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:27.183961  128282 machine.go:91] provisioned docker machine in 1.004345653s
	I1212 23:17:27.183972  128282 start.go:300] post-start starting for "default-k8s-diff-port-850839" (driver="kvm2")
	I1212 23:17:27.183982  128282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:27.183999  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.184348  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:27.184398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.187375  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187709  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.187759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187858  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.188054  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.188248  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.188400  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.277858  128282 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:27.282128  128282 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:27.282157  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:27.282244  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:27.282368  128282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:27.282481  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:27.291755  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:27.313541  128282 start.go:303] post-start completed in 129.554425ms
	I1212 23:17:27.313563  128282 fix.go:56] fixHost completed within 25.388839079s
	I1212 23:17:27.313586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.316388  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316737  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.316760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316934  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.317141  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317343  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317540  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.317789  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:27.318143  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:27.318158  128282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:27.436207  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423047.383892438
	
	I1212 23:17:27.436230  128282 fix.go:206] guest clock: 1702423047.383892438
	I1212 23:17:27.436237  128282 fix.go:219] Guest: 2023-12-12 23:17:27.383892438 +0000 UTC Remote: 2023-12-12 23:17:27.313567546 +0000 UTC m=+296.357388926 (delta=70.324892ms)
	I1212 23:17:27.436261  128282 fix.go:190] guest clock delta is within tolerance: 70.324892ms
	I1212 23:17:27.436266  128282 start.go:83] releasing machines lock for "default-k8s-diff-port-850839", held for 25.511577503s
	I1212 23:17:27.436289  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.436571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:27.439315  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439697  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.439730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440396  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440660  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440741  128282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:27.440793  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.440873  128282 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:27.440891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.443558  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443880  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443938  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.443965  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444132  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444338  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444369  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.444398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444741  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.444788  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444907  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.445073  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.528730  128282 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:27.563590  128282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:27.715220  128282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:27.722775  128282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:27.722883  128282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:27.743217  128282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:27.743264  128282 start.go:475] detecting cgroup driver to use...
	I1212 23:17:27.743344  128282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:27.759125  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:27.772532  128282 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:27.772602  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:27.786439  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:27.800413  128282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:27.905626  128282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:28.037279  128282 docker.go:219] disabling docker service ...
	I1212 23:17:28.037362  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:28.050670  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:28.063551  128282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:28.195512  128282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:28.306881  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:28.324506  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:28.344908  128282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:28.344992  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.354788  128282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:28.354883  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.364157  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.373415  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.383391  128282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
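	The sed commands above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, all inside /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of that line-oriented edit, shown only as a sketch; minikube actually issues the sed commands over SSH as logged:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCrioOption replaces any existing `key = ...` line in conf with the
    // given value, quoting it the way the sed commands above do, or appends
    // the setting if no such line exists.
    func setCrioOption(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        repl := fmt.Sprintf("%s = %q", key, value)
        if re.Match(conf) {
            return re.ReplaceAll(conf, []byte(repl))
        }
        return append(conf, []byte(repl+"\n")...)
    }

    func main() {
        conf, err := os.ReadFile("02-crio.conf") // hypothetical local copy of the drop-in
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
        os.Stdout.Write(conf)
    }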
	I1212 23:17:28.393203  128282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:28.401935  128282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:28.402006  128282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:28.413618  128282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:28.426007  128282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:28.536725  128282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:28.711815  128282 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:28.711892  128282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:28.717242  128282 start.go:543] Will wait 60s for crictl version
	I1212 23:17:28.717306  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:17:28.724383  128282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:28.779687  128282 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:28.779781  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.834147  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.894131  128282 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:24.917347  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.917438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.928690  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.417259  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.417343  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.428544  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.917136  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.917212  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.927813  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.417826  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.417917  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.428147  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.917724  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.917803  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.929515  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.416997  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.417102  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.428180  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.917712  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.917830  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.931264  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.417370  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.417479  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.432478  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.916907  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.917039  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.932698  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:29.416883  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.416989  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.434138  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.895767  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:28.898899  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899233  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:28.899276  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899500  128282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:28.903950  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:28.917270  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:28.917383  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:28.956752  128282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:28.956832  128282 ssh_runner.go:195] Run: which lz4
	I1212 23:17:28.961387  128282 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:28.965850  128282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:28.965925  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:30.869493  128282 crio.go:444] Took 1.908152 seconds to copy over tarball
	I1212 23:17:30.869580  128282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:28.610279  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:31.088625  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:28.873664  127760 main.go:141] libmachine: (embed-certs-809120) Waiting to get IP...
	I1212 23:17:28.874489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:28.874895  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:28.874992  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:28.874848  129329 retry.go:31] will retry after 244.313261ms: waiting for machine to come up
	I1212 23:17:29.120442  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.120959  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.120997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.120852  129329 retry.go:31] will retry after 369.234988ms: waiting for machine to come up
	I1212 23:17:29.491516  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.492081  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.492124  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.492035  129329 retry.go:31] will retry after 448.746179ms: waiting for machine to come up
	I1212 23:17:29.942643  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.943286  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.943319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.943229  129329 retry.go:31] will retry after 520.98965ms: waiting for machine to come up
	I1212 23:17:30.465955  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:30.466468  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:30.466503  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:30.466430  129329 retry.go:31] will retry after 617.123622ms: waiting for machine to come up
	I1212 23:17:31.085159  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.085706  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.085746  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.085665  129329 retry.go:31] will retry after 853.539861ms: waiting for machine to come up
	I1212 23:17:31.940795  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.941240  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.941265  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.941169  129329 retry.go:31] will retry after 960.346145ms: waiting for machine to come up
	I1212 23:17:29.916897  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.917007  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.932055  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.417555  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.417657  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.433218  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.917841  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.917967  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.933255  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.417271  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.417357  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.429192  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.917804  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.917908  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.930333  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:32.393106  128156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:32.393209  128156 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:32.393228  128156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:32.393315  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:32.445688  128156 cri.go:89] found id: ""
	I1212 23:17:32.445774  128156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:32.462269  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:32.473687  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:32.473768  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483043  128156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483075  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:32.656758  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.442637  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.666131  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.751061  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.855861  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:33.855952  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:33.879438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.403317  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.178083  128282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.308463726s)
	I1212 23:17:34.178124  128282 crio.go:451] Took 3.308601 seconds to extract the tarball
	I1212 23:17:34.178136  128282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:34.219740  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:34.268961  128282 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:34.268987  128282 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:34.269051  128282 ssh_runner.go:195] Run: crio config
	I1212 23:17:34.326979  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:34.327007  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:34.327033  128282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:34.327060  128282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850839 NodeName:default-k8s-diff-port-850839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:34.327252  128282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:34.327353  128282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 23:17:34.327425  128282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:34.338300  128282 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:34.338385  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:34.347329  128282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 23:17:34.364120  128282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:34.380374  128282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 23:17:34.398219  128282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:34.402134  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:34.415197  128282 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839 for IP: 192.168.39.180
	I1212 23:17:34.415252  128282 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:34.415436  128282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:34.415472  128282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:34.415540  128282 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.key
	I1212 23:17:34.415593  128282 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key.66237cde
	I1212 23:17:34.415626  128282 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key
	I1212 23:17:34.415739  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:34.415780  128282 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:34.415793  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:34.415841  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:34.415886  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:34.415931  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:34.415990  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:34.416632  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:34.440783  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:34.466303  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:34.491267  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:17:34.516659  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:34.542472  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:34.569367  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:34.599627  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:34.628781  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:34.655361  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:34.681199  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:34.706068  128282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:34.724142  128282 ssh_runner.go:195] Run: openssl version
	I1212 23:17:34.730108  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:34.740221  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745118  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745203  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.751091  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:34.761120  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:34.771456  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776480  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776559  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.782833  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:34.793597  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:34.804519  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809767  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809831  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.815838  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:34.825967  128282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:34.831487  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:34.838280  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:34.845663  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:34.854810  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:34.862962  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:34.869641  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:17:34.876373  128282 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:34.876509  128282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:34.876579  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:34.918413  128282 cri.go:89] found id: ""
	I1212 23:17:34.918486  128282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:34.928267  128282 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:34.928305  128282 kubeadm.go:636] restartCluster start
	I1212 23:17:34.928396  128282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:34.938202  128282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.939397  128282 kubeconfig.go:92] found "default-k8s-diff-port-850839" server: "https://192.168.39.180:8444"
	I1212 23:17:34.941945  128282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:34.953458  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.953552  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.965537  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.965561  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.965623  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.977454  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.478209  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.478292  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.505825  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.978537  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.978615  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.991422  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:33.591861  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:35.629760  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:32.902889  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:32.903556  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:32.903588  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:32.903500  129329 retry.go:31] will retry after 1.225619987s: waiting for machine to come up
	I1212 23:17:34.130560  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:34.131066  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:34.131098  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:34.131009  129329 retry.go:31] will retry after 1.544530633s: waiting for machine to come up
	I1212 23:17:35.677455  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:35.677916  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:35.677939  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:35.677902  129329 retry.go:31] will retry after 1.740004665s: waiting for machine to come up
	I1212 23:17:37.419743  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:37.420167  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:37.420203  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:37.420121  129329 retry.go:31] will retry after 2.220250897s: waiting for machine to come up
	I1212 23:17:34.902923  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.402835  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.903269  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.403728  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.903298  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.403775  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.903663  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.403892  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.429370  128156 api_server.go:72] duration metric: took 4.573508338s to wait for apiserver process to appear ...
	I1212 23:17:38.429402  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:38.429424  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.429952  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.430019  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.430455  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.931234  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:36.478240  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.478317  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.494437  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:36.978574  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.978654  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.995711  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.478404  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.478484  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.492356  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.977979  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.978123  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.993637  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.478102  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.478227  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.494347  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.977645  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.977771  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.994288  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.477795  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.477942  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.495986  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.978587  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.978695  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.994551  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.477958  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.478056  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.492956  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.978560  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.978663  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.994199  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.089524  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:40.591793  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:39.643094  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:39.643562  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:39.643603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:39.643508  129329 retry.go:31] will retry after 2.987735855s: waiting for machine to come up
	I1212 23:17:42.633477  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:42.633958  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:42.633993  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:42.633907  129329 retry.go:31] will retry after 3.131576961s: waiting for machine to come up
	I1212 23:17:41.334632  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:41.334685  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:41.334703  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.392719  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.392768  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.431413  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.445393  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.445428  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.930605  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.935880  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.935918  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.430551  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.435690  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:42.435720  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.931341  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.936295  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:17:42.944125  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:17:42.944163  128156 api_server.go:131] duration metric: took 4.514753942s to wait for apiserver health ...
	I1212 23:17:42.944173  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:42.944179  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:42.945951  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:42.947258  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:42.957745  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:42.978269  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:42.990231  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:42.990267  128156 system_pods.go:61] "coredns-76f75df574-2rdhr" [266c2440-a927-476c-b918-d0712834fc2f] Running
	I1212 23:17:42.990274  128156 system_pods.go:61] "etcd-no-preload-115023" [522ee237-12e0-4b83-9e20-05713cd87c7d] Running
	I1212 23:17:42.990281  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [9048886a-1b8b-407d-bd71-c5a850d88a5f] Running
	I1212 23:17:42.990287  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [4652e03f-2622-41d8-8791-bcc648d43432] Running
	I1212 23:17:42.990292  128156 system_pods.go:61] "kube-proxy-rqhmc" [b7514603-3389-4a38-b24a-e9c7948364bc] Running
	I1212 23:17:42.990299  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [7ce16391-9627-454b-b0de-27af47921997] Running
	I1212 23:17:42.990308  128156 system_pods.go:61] "metrics-server-57f55c9bc5-b42rv" [f27bd873-340b-4ae1-922a-ed8f52d558dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:42.990316  128156 system_pods.go:61] "storage-provisioner" [d9565f7f-dcf4-4e4d-88fd-e8a54bbf0e40] Running
	I1212 23:17:42.990327  128156 system_pods.go:74] duration metric: took 12.031472ms to wait for pod list to return data ...
	I1212 23:17:42.990347  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:42.994787  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:42.994817  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:42.994827  128156 node_conditions.go:105] duration metric: took 4.471497ms to run NodePressure ...
	I1212 23:17:42.994844  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.281299  128156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:43.286299  128156 retry.go:31] will retry after 184.15509ms: kubelet not initialised
	I1212 23:17:43.476354  128156 retry.go:31] will retry after 533.806598ms: kubelet not initialised
	I1212 23:17:44.036349  128156 retry.go:31] will retry after 483.473669ms: kubelet not initialised
	I1212 23:17:41.477798  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.477898  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.493963  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:41.977991  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.978077  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.994590  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.478242  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.478334  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.495374  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.978495  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.978597  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.992337  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.477604  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.477667  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.491061  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.977638  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.977754  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.991654  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.478308  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:44.478409  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:44.494965  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.953708  128282 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:44.953763  128282 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:44.953780  128282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:44.953874  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:45.003440  128282 cri.go:89] found id: ""
	I1212 23:17:45.003519  128282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:45.021471  128282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:45.036134  128282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:45.036203  128282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049188  128282 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049214  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.197549  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.958707  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.088583  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.587947  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:47.588918  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.768814  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:45.769238  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:45.769270  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:45.769171  129329 retry.go:31] will retry after 3.722952815s: waiting for machine to come up
	I1212 23:17:44.529285  128156 kubeadm.go:787] kubelet initialised
	I1212 23:17:44.529310  128156 kubeadm.go:788] duration metric: took 1.247981757s waiting for restarted kubelet to initialise ...
	I1212 23:17:44.529321  128156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:44.551751  128156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:46.588427  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:48.589582  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:46.161702  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.251040  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.344286  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:46.344385  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.359646  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.875339  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.375793  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.875532  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.375394  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.875412  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.903144  128282 api_server.go:72] duration metric: took 2.558861066s to wait for apiserver process to appear ...
	I1212 23:17:48.903170  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:48.903188  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.903660  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:48.903697  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.904122  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:49.404880  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:50.088813  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.089208  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:49.494927  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495446  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has current primary IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495474  127760 main.go:141] libmachine: (embed-certs-809120) Found IP for machine: 192.168.50.221
	I1212 23:17:49.495489  127760 main.go:141] libmachine: (embed-certs-809120) Reserving static IP address...
	I1212 23:17:49.495884  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.495933  127760 main.go:141] libmachine: (embed-certs-809120) DBG | skip adding static IP to network mk-embed-certs-809120 - found existing host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"}
	I1212 23:17:49.495954  127760 main.go:141] libmachine: (embed-certs-809120) Reserved static IP address: 192.168.50.221
	I1212 23:17:49.495971  127760 main.go:141] libmachine: (embed-certs-809120) Waiting for SSH to be available...
	I1212 23:17:49.495987  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Getting to WaitForSSH function...
	I1212 23:17:49.498007  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498362  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.498398  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498514  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH client type: external
	I1212 23:17:49.498545  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa (-rw-------)
	I1212 23:17:49.498583  127760 main.go:141] libmachine: (embed-certs-809120) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:49.498598  127760 main.go:141] libmachine: (embed-certs-809120) DBG | About to run SSH command:
	I1212 23:17:49.498615  127760 main.go:141] libmachine: (embed-certs-809120) DBG | exit 0
	I1212 23:17:49.635655  127760 main.go:141] libmachine: (embed-certs-809120) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:49.636039  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetConfigRaw
	I1212 23:17:49.636795  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.639601  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640032  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.640059  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640367  127760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:17:49.640604  127760 machine.go:88] provisioning docker machine ...
	I1212 23:17:49.640629  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:49.640901  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641044  127760 buildroot.go:166] provisioning hostname "embed-certs-809120"
	I1212 23:17:49.641066  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641184  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.643599  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644050  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.644082  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644210  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.644439  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644612  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644791  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.644961  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.645333  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.645350  127760 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-809120 && echo "embed-certs-809120" | sudo tee /etc/hostname
	I1212 23:17:49.779263  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-809120
	
	I1212 23:17:49.779298  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.782329  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782739  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.782772  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782891  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.783133  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783306  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783466  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.783641  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.784029  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.784055  127760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-809120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-809120/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-809120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:49.914603  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:49.914641  127760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:49.914673  127760 buildroot.go:174] setting up certificates
	I1212 23:17:49.914686  127760 provision.go:83] configureAuth start
	I1212 23:17:49.914704  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.915021  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.918281  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918661  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.918715  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918849  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.921184  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921566  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.921603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921732  127760 provision.go:138] copyHostCerts
	I1212 23:17:49.921811  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:49.921824  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:49.921891  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:49.922013  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:49.922030  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:49.922061  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:49.922139  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:49.922149  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:49.922174  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:49.922255  127760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-809120 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube embed-certs-809120]
	I1212 23:17:50.309293  127760 provision.go:172] copyRemoteCerts
	I1212 23:17:50.309361  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:50.309389  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.312319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.312745  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312942  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.313157  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.313362  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.313554  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.401075  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:50.426930  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:17:50.452785  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:50.480062  127760 provision.go:86] duration metric: configureAuth took 565.356144ms
	I1212 23:17:50.480098  127760 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:50.480377  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:50.480523  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.483621  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484035  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.484091  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484244  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.484455  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484603  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484728  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.484903  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.485264  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.485282  127760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:50.842779  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:50.842815  127760 machine.go:91] provisioned docker machine in 1.202192917s
	I1212 23:17:50.842831  127760 start.go:300] post-start starting for "embed-certs-809120" (driver="kvm2")
	I1212 23:17:50.842846  127760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:50.842882  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:50.843282  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:50.843318  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.846267  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846670  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.846704  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846881  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.847102  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.847322  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.847496  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.934904  127760 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:50.939875  127760 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:50.939912  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:50.940000  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:50.940130  127760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:50.940242  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:50.950764  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:50.977204  127760 start.go:303] post-start completed in 134.34972ms
	I1212 23:17:50.977232  127760 fix.go:56] fixHost completed within 23.540815255s
	I1212 23:17:50.977256  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.980553  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981029  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.981065  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981350  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.981611  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981766  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981917  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.982111  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.982448  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.982467  127760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:51.096273  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423071.035304579
	
	I1212 23:17:51.096303  127760 fix.go:206] guest clock: 1702423071.035304579
	I1212 23:17:51.096311  127760 fix.go:219] Guest: 2023-12-12 23:17:51.035304579 +0000 UTC Remote: 2023-12-12 23:17:50.977236465 +0000 UTC m=+368.149225502 (delta=58.068114ms)
	I1212 23:17:51.096365  127760 fix.go:190] guest clock delta is within tolerance: 58.068114ms
	I1212 23:17:51.096375  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 23.659994787s
	I1212 23:17:51.096401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.096676  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:51.099275  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099683  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.099714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099864  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100586  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100671  127760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:51.100713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.100833  127760 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:51.100859  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.103808  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104103  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104214  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104268  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104379  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104415  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104405  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104615  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104620  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104817  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.104999  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.105058  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.105220  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.214734  127760 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:51.221556  127760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:51.379699  127760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:51.386319  127760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:51.386411  127760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:51.406594  127760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:51.406623  127760 start.go:475] detecting cgroup driver to use...
	I1212 23:17:51.406707  127760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:51.421646  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:51.439574  127760 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:51.439651  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:51.456389  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:51.469380  127760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:51.576093  127760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:51.711468  127760 docker.go:219] disabling docker service ...
	I1212 23:17:51.711548  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:51.726747  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:51.739661  127760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:51.852974  127760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:51.973603  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:51.986983  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:52.004739  127760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:52.004809  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.017255  127760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:52.017345  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.029275  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.040398  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.051671  127760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:52.062036  127760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:52.070879  127760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:52.070958  127760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:52.087878  127760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:52.099487  127760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:52.246621  127760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:52.445182  127760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:52.445259  127760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:52.450378  127760 start.go:543] Will wait 60s for crictl version
	I1212 23:17:52.450458  127760 ssh_runner.go:195] Run: which crictl
	I1212 23:17:52.454778  127760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:52.497569  127760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:52.497679  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.562042  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.622289  127760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:52.623892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:52.626997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627438  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:52.627474  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627731  127760 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:52.633387  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:52.647682  127760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:52.647763  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:52.691061  127760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:52.691138  127760 ssh_runner.go:195] Run: which lz4
	I1212 23:17:52.695575  127760 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:52.701228  127760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:52.701265  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:53.042479  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.042516  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.042532  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.134475  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.134511  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.404943  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.413791  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.413829  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:53.904341  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.916515  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.916564  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:54.404229  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:54.414091  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:17:54.428577  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:17:54.428615  128282 api_server.go:131] duration metric: took 5.525437271s to wait for apiserver health ...
	I1212 23:17:54.428628  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:54.428638  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:54.430838  128282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:50.589742  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.593395  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:54.432405  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:54.450147  128282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:54.496866  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:54.519276  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:54.519327  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:17:54.519339  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:17:54.519354  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:17:54.519405  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:17:54.519418  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:17:54.519434  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:17:54.519447  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:54.519484  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:17:54.519498  128282 system_pods.go:74] duration metric: took 22.603103ms to wait for pod list to return data ...
	I1212 23:17:54.519512  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:54.526046  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:54.526083  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:54.526098  128282 node_conditions.go:105] duration metric: took 6.575834ms to run NodePressure ...
	I1212 23:17:54.526127  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:54.979886  128282 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991132  128282 kubeadm.go:787] kubelet initialised
	I1212 23:17:54.991169  128282 kubeadm.go:788] duration metric: took 11.248765ms waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991185  128282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:54.999550  128282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.008465  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008494  128282 pod_ready.go:81] duration metric: took 8.904589ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.008508  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008516  128282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.020120  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020152  128282 pod_ready.go:81] duration metric: took 11.625987ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.020164  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020191  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.030018  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030056  128282 pod_ready.go:81] duration metric: took 9.856172ms waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.030074  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030083  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.039957  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.039997  128282 pod_ready.go:81] duration metric: took 9.902972ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.040015  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.040025  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.384922  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384964  128282 pod_ready.go:81] duration metric: took 344.925878ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.384979  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384988  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.791268  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791307  128282 pod_ready.go:81] duration metric: took 406.306307ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.791323  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791335  128282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:56.186386  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186484  128282 pod_ready.go:81] duration metric: took 395.136012ms waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:56.186514  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186553  128282 pod_ready.go:38] duration metric: took 1.195355612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
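	The extra wait above mirrors what a manual readiness check would do; a rough kubectl equivalent, assuming the kubeconfig context written for this profile:
	  # Wait for the CoreDNS pods (label k8s-app=kube-dns) to report Ready, as pod_ready.go does above
	  kubectl --context default-k8s-diff-port-850839 -n kube-system \
	    wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m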
	I1212 23:17:56.186577  128282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:56.201434  128282 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:56.201462  128282 kubeadm.go:640] restartCluster took 21.273148264s
	I1212 23:17:56.201473  128282 kubeadm.go:406] StartCluster complete in 21.325115034s
	I1212 23:17:56.201496  128282 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.201592  128282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:56.204683  128282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.205095  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:56.205222  128282 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:56.205300  128282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205321  128282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205330  128282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205346  128282 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205361  128282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850839"
	W1212 23:17:56.205363  128282 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:56.205324  128282 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205445  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205360  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 23:17:56.205501  128282 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:56.205595  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205832  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205855  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205918  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205939  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205978  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.206077  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.215695  128282 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850839" context rescaled to 1 replicas
	I1212 23:17:56.215745  128282 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:56.219003  128282 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:56.221363  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.223684  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I1212 23:17:56.223901  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1212 23:17:56.224018  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I1212 23:17:56.224530  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.224610  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225015  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225250  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.225260  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.225597  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.225990  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.226015  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.226308  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.226318  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.227368  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.227535  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.229799  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.229817  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.230427  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.232575  128282 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-850839"
	W1212 23:17:56.232593  128282 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:56.232623  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.233075  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233110  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.233880  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233930  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.245636  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1212 23:17:56.246119  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.246606  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.246623  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.246950  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.247098  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.248959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.251159  128282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:56.249918  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1212 23:17:56.251294  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1212 23:17:56.252768  128282 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.252783  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:56.252798  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.253647  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.253753  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.254219  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254233  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254340  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254347  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254690  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254749  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.255310  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.255335  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.256017  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256612  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.256639  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.257003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.257189  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.257402  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.258242  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.260097  128282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:54.115994  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:55.606824  127900 pod_ready.go:92] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.606858  127900 pod_ready.go:81] duration metric: took 34.03725266s waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.606872  127900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619163  127900 pod_ready.go:92] pod "etcd-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.619197  127900 pod_ready.go:81] duration metric: took 12.316097ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619212  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627282  127900 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.627313  127900 pod_ready.go:81] duration metric: took 8.08913ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627328  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634928  127900 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.634962  127900 pod_ready.go:81] duration metric: took 7.625067ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634978  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644531  127900 pod_ready.go:92] pod "kube-proxy-b6lz6" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.644558  127900 pod_ready.go:81] duration metric: took 9.571853ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644572  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985318  127900 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.985350  127900 pod_ready.go:81] duration metric: took 340.769789ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985366  127900 pod_ready.go:38] duration metric: took 34.420989087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:55.985382  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:55.985443  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:56.008913  127900 api_server.go:72] duration metric: took 42.305439195s to wait for apiserver process to appear ...
	I1212 23:17:56.009000  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:56.009030  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:56.017005  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:56.018170  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:56.018198  127900 api_server.go:131] duration metric: took 9.18267ms to wait for apiserver health ...
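	The healthz probe above can be reproduced by hand; a sketch, using -k because the apiserver presents the cluster's self-signed certificate:
	  # A healthy apiserver answers HTTP 200 with the literal body "ok"
	  curl -sk https://192.168.61.146:8443/healthz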
	I1212 23:17:56.018209  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:56.189360  127900 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:56.189394  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.189401  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.189408  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.189415  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.189421  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.189428  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.189437  127900 system_pods.go:61] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.189447  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.189462  127900 system_pods.go:74] duration metric: took 171.24435ms to wait for pod list to return data ...
	I1212 23:17:56.189477  127900 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:17:56.386180  127900 default_sa.go:45] found service account: "default"
	I1212 23:17:56.386211  127900 default_sa.go:55] duration metric: took 196.72345ms for default service account to be created ...
	I1212 23:17:56.386223  127900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:17:56.591313  127900 system_pods.go:86] 8 kube-system pods found
	I1212 23:17:56.591345  127900 system_pods.go:89] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.591354  127900 system_pods.go:89] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.591361  127900 system_pods.go:89] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.591369  127900 system_pods.go:89] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.591375  127900 system_pods.go:89] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.591382  127900 system_pods.go:89] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.591393  127900 system_pods.go:89] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.591401  127900 system_pods.go:89] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.591414  127900 system_pods.go:126] duration metric: took 205.183283ms to wait for k8s-apps to be running ...
	I1212 23:17:56.591429  127900 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:17:56.591482  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.611938  127900 system_svc.go:56] duration metric: took 20.493956ms WaitForService to wait for kubelet.
	I1212 23:17:56.611982  127900 kubeadm.go:581] duration metric: took 42.908516938s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:17:56.612014  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:56.785799  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:56.785841  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:56.785856  127900 node_conditions.go:105] duration metric: took 173.834506ms to run NodePressure ...
	I1212 23:17:56.785874  127900 start.go:228] waiting for startup goroutines ...
	I1212 23:17:56.785883  127900 start.go:233] waiting for cluster config update ...
	I1212 23:17:56.785898  127900 start.go:242] writing updated cluster config ...
	I1212 23:17:56.786402  127900 ssh_runner.go:195] Run: rm -f paused
	I1212 23:17:56.860461  127900 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 23:17:56.862646  127900 out.go:177] 
	W1212 23:17:56.864213  127900 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 23:17:56.865656  127900 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 23:17:56.867482  127900 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-549640" cluster and "default" namespace by default
	I1212 23:17:54.719978  127760 crio.go:444] Took 2.024442 seconds to copy over tarball
	I1212 23:17:54.720063  127760 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:56.261553  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:56.261577  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:56.261599  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.269093  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269478  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.269501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269778  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.269969  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.270192  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.270348  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.273173  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1212 23:17:56.273551  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.274146  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.274170  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.274479  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.274657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.276224  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.276536  128282 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.276553  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:56.276572  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.279571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.279991  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.280030  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.280183  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.280395  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.280562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.280708  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.399444  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.447026  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:56.447058  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:56.453920  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.474280  128282 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:56.474316  128282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:17:56.509564  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:56.509598  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:56.575180  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:56.575217  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:56.641478  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:58.298873  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.89938362s)
	I1212 23:17:58.298942  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.298948  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.844991558s)
	I1212 23:17:58.298957  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.298986  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299063  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299326  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299356  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299367  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299387  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299439  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299448  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299463  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299479  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299489  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299673  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299690  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299850  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299879  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299899  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.308876  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.308905  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.309195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.309232  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.309241  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.418788  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.777244462s)
	I1212 23:17:58.418849  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.418866  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.419251  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.419285  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.419297  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.419308  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.420803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.420837  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.420857  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.420876  128282 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:58.591048  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:58.635345  128282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:17:54.595102  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:57.089235  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:58.815643  128282 addons.go:502] enable addons completed in 2.610454188s: enabled=[storage-provisioner default-storageclass metrics-server]
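	Two quick follow-up checks for the metrics-server addon just enabled; a sketch, assuming the upstream default object names rather than anything shown in this log:
	  # The deployment should roll out, and its APIService should be registered and become Available
	  kubectl --context default-k8s-diff-port-850839 -n kube-system rollout status deployment/metrics-server
	  kubectl --context default-k8s-diff-port-850839 get apiservice v1beta1.metrics.k8s.io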
	I1212 23:17:58.247448  127760 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.527350021s)
	I1212 23:17:58.247482  127760 crio.go:451] Took 3.527472 seconds to extract the tarball
	I1212 23:17:58.247500  127760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:58.292239  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:58.347669  127760 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:58.347700  127760 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:58.347774  127760 ssh_runner.go:195] Run: crio config
	I1212 23:17:58.410577  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:17:58.410604  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:58.410627  127760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:58.410658  127760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-809120 NodeName:embed-certs-809120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:58.410874  127760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-809120"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:58.410973  127760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-809120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:58.411040  127760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:58.422571  127760 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:58.422655  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:58.432833  127760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:17:58.449996  127760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:58.468807  127760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 23:17:58.487568  127760 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:58.492547  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:58.505497  127760 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120 for IP: 192.168.50.221
	I1212 23:17:58.505548  127760 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:58.505759  127760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:58.505820  127760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:58.505891  127760 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/client.key
	I1212 23:17:58.585996  127760 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key.edab0817
	I1212 23:17:58.586114  127760 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key
	I1212 23:17:58.586288  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:58.586319  127760 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:58.586330  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:58.586356  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:58.586381  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:58.586418  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:58.586483  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:58.587254  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:58.615215  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:58.644237  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:58.670345  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:58.694986  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:58.719944  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:58.744701  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:58.768614  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:58.792922  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:58.815723  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:58.840192  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:58.864277  127760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:58.883069  127760 ssh_runner.go:195] Run: openssl version
	I1212 23:17:58.889642  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:58.901893  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906910  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906964  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.912769  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:58.924171  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:58.937368  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942604  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942681  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.948759  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:58.959757  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:58.971091  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976035  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976105  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.982246  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
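	The hash-and-symlink sequence above is the standard OpenSSL trust-store layout; a condensed sketch of the same steps:
	  # Hash the CA cert's subject, then expose it as <hash>.0 so OpenSSL's lookup finds it
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"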
	I1212 23:17:58.994786  127760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:58.999625  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:59.006233  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:59.012668  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:59.018959  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:59.025039  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:59.031628  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
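	The -checkend probes above assert that each certificate stays valid for at least the next 24 hours; the same check in isolation:
	  # Exit status 0 means the cert is still valid 86400s (24h) from now; 1 means it expires sooner
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid in 24h" || echo "expires within 24h"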
	I1212 23:17:59.037633  127760 kubeadm.go:404] StartCluster: {Name:embed-certs-809120 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:59.037779  127760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:59.037837  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:59.078977  127760 cri.go:89] found id: ""
	I1212 23:17:59.079065  127760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:59.090869  127760 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:59.090893  127760 kubeadm.go:636] restartCluster start
	I1212 23:17:59.090957  127760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:59.101950  127760 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.103088  127760 kubeconfig.go:92] found "embed-certs-809120" server: "https://192.168.50.221:8443"
	I1212 23:17:59.105562  127760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:59.115942  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.116006  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.128428  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.128452  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.128508  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.141075  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.641778  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.641854  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.654519  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.142171  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.142275  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.157160  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.641601  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.641719  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.654666  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.141184  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.141289  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.154899  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.641381  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.641501  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.654663  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.141186  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.141311  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.154140  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.642051  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.642157  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.655013  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
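The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above show minikube polling for a kube-apiserver process on the guest roughly every 500ms while the control plane is still down. A minimal, self-contained sketch of that kind of poll loop is below; as an assumption it runs pgrep locally via os/exec, whereas the real code issues the command over SSH through ssh_runner.

// Illustrative sketch of a "wait for the apiserver process" poll loop.
// Assumption: pgrep and sudo are available locally; minikube actually runs
// this command over SSH inside the guest VM.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID returns the newest matching kube-apiserver PID, or an error
// if no such process exists yet (pgrep exits non-zero).
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Printf("apiserver process appeared: %s", pid)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out after %s waiting for kube-apiserver", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}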
	I1212 23:17:59.586733  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.588383  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:03.588956  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.092631  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:03.591508  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:04.090728  128282 node_ready.go:49] node "default-k8s-diff-port-850839" has status "Ready":"True"
	I1212 23:18:04.090757  128282 node_ready.go:38] duration metric: took 7.616412902s waiting for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:18:04.090775  128282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:04.099347  128282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107155  128282 pod_ready.go:92] pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.107180  128282 pod_ready.go:81] duration metric: took 7.807715ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107192  128282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113524  128282 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.113547  128282 pod_ready.go:81] duration metric: took 6.348789ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113557  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
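The pod_ready waits in this log poll each system-critical pod until its Ready condition reports True. For reference, the condition check such a wait ultimately performs can be expressed with the Kubernetes API types roughly as below; this is a sketch only (it needs k8s.io/api as a module dependency) and omits the timeout and label-selection handling the real helper behind pod_ready.go carries.

// Sketch: determine whether a Pod's Ready condition is True, which is what
// the pod_ready waits in the log are polling for.
package ready

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the given pod has a Ready condition with
// status True. Pending pods, or pods whose containers are not yet ready,
// show up as "Ready":"False" exactly as seen in the log lines above.
func IsPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}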
	I1212 23:18:03.141560  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.141654  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.156399  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:03.642066  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.642159  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.657347  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.141755  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.141837  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.158471  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.641645  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.641754  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.655061  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.141603  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.141699  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.154832  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.641246  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.641321  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.658753  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.141224  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.141299  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.156055  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.641506  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.641593  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.654083  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.141490  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.141570  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.154699  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.641257  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.641336  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.653935  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.590423  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.088212  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:06.134727  128282 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:07.136828  128282 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.136854  128282 pod_ready.go:81] duration metric: took 3.023290043s waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.136866  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151525  128282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.151554  128282 pod_ready.go:81] duration metric: took 14.680003ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151570  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293823  128282 pod_ready.go:92] pod "kube-proxy-wjrjj" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.293853  128282 pod_ready.go:81] duration metric: took 142.276185ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293864  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690262  128282 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.690291  128282 pod_ready.go:81] duration metric: took 396.420266ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690311  128282 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:10.001790  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.141984  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.142065  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.154365  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:08.641957  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.642070  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.654449  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:09.117052  127760 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:18:09.117093  127760 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:18:09.117131  127760 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:18:09.117195  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:18:09.165861  127760 cri.go:89] found id: ""
	I1212 23:18:09.165944  127760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:18:09.183729  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:18:09.194407  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:18:09.194487  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204575  127760 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204609  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:09.333758  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.380332  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04653446s)
	I1212 23:18:10.380364  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.603185  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.692919  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.776099  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:18:10.776189  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.795881  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.310083  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.809948  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.309977  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.810420  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
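The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full kubeadm init. A minimal sketch of driving that phase sequence is below; as an assumption it shells out locally, whereas minikube wraps each command in /bin/bash -c with its bundled binaries on PATH and executes it over SSH on the node.

// Illustrative sketch: re-run selected kubeadm init phases in order, as the
// cluster-restart path in the log does. Paths and the PATH prefix mirror the
// log; running this for real requires root on the target node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase,
		)
		c := exec.Command("/bin/bash", "-c", cmd)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}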
	I1212 23:18:10.089789  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.589345  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:14.002715  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:13.310509  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:13.336361  127760 api_server.go:72] duration metric: took 2.560264825s to wait for apiserver process to appear ...
	I1212 23:18:13.336391  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:18:13.336411  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.319120  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.319159  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.319177  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.400337  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.400373  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.900625  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.906178  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:17.906233  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.401353  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.407217  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:18.407262  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.901435  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.913756  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:18:18.922517  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:18:18.922545  127760 api_server.go:131] duration metric: took 5.586147801s to wait for apiserver health ...
	I1212 23:18:18.922556  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:18:18.922563  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:18:18.924845  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
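The healthz probes above progress from 403 (the anonymous probe is rejected while the RBAC bootstrap roles have not yet been created) to 500 (poststarthook/rbac/bootstrap-roles and the priority-class bootstrap still failing) to 200 "ok". A minimal sketch of such a poll loop follows; it assumes an anonymous HTTPS client with TLS verification disabled, and the actual logic in api_server.go may differ in how it authenticates and interprets the response body.

// Illustrative sketch: poll https://<ip>:8443/healthz until it returns 200 "ok".
// Assumption: we probe anonymously and skip TLS verification, which is why an
// apiserver that has not yet granted system:anonymous access answers 403.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.221:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}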
	I1212 23:18:15.088538  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:17.587744  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:16.503957  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.002214  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:18.926570  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:18:18.976384  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:18:19.009915  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:18:19.035935  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:18:19.035986  127760 system_pods.go:61] "coredns-5dd5756b68-bz6cz" [4f53d6a6-c877-4f76-8aca-06ee891d9652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:18:19.035996  127760 system_pods.go:61] "etcd-embed-certs-809120" [260387de-7507-4962-b2fd-90cd6b39cae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:18:19.036005  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [94ded414-9813-4d0e-8de4-8ad5f6c16a33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:18:19.036017  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [c6574dde-8281-4dd2-bacd-c0412f1f592c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:18:19.036028  127760 system_pods.go:61] "kube-proxy-h7zgl" [87ca2a99-1da7-4a50-b4c7-f160cddf9ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:18:19.036042  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [fc6d3a5c-4056-47f8-9156-f5d370ba1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:18:19.036053  127760 system_pods.go:61] "metrics-server-57f55c9bc5-mxsd2" [d519663c-7921-4fc9-8d0f-ecf6d3cdbd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:18:19.036071  127760 system_pods.go:61] "storage-provisioner" [900e5cb9-7d27-4446-b15d-21f67fa3b629] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:18:19.036081  127760 system_pods.go:74] duration metric: took 26.13268ms to wait for pod list to return data ...
	I1212 23:18:19.036093  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:18:19.045885  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:18:19.045930  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:18:19.045945  127760 node_conditions.go:105] duration metric: took 9.842707ms to run NodePressure ...
	I1212 23:18:19.045969  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:19.587096  127760 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593698  127760 kubeadm.go:787] kubelet initialised
	I1212 23:18:19.593722  127760 kubeadm.go:788] duration metric: took 6.595854ms waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593730  127760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:19.602567  127760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:21.623798  127760 pod_ready.go:102] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.590788  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:22.089448  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:24.090497  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:21.501964  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.502814  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:26.000629  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.124864  127760 pod_ready.go:92] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:23.124888  127760 pod_ready.go:81] duration metric: took 3.52228673s waiting for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:23.124898  127760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:25.143967  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.146069  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.645645  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.645671  127760 pod_ready.go:81] duration metric: took 4.520766787s waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.645686  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652369  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.652392  127760 pod_ready.go:81] duration metric: took 6.700076ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652402  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587478  128156 pod_ready.go:92] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.587505  128156 pod_ready.go:81] duration metric: took 40.035726456s waiting for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587518  128156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.596994  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.597015  128156 pod_ready.go:81] duration metric: took 9.490538ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.597027  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601904  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.601930  128156 pod_ready.go:81] duration metric: took 4.894855ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601942  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608643  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.608662  128156 pod_ready.go:81] duration metric: took 6.712079ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608673  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614595  128156 pod_ready.go:92] pod "kube-proxy-rqhmc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.614624  128156 pod_ready.go:81] duration metric: took 5.945157ms waiting for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614632  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985244  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.985272  128156 pod_ready.go:81] duration metric: took 370.631498ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985282  128156 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.293707  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.293859  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:28.500792  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:31.002513  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.676207  127760 pod_ready.go:102] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:32.172306  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.172339  127760 pod_ready.go:81] duration metric: took 4.519929269s waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.172355  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178133  127760 pod_ready.go:92] pod "kube-proxy-h7zgl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.178154  127760 pod_ready.go:81] duration metric: took 5.793304ms waiting for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178163  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184283  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.184305  127760 pod_ready.go:81] duration metric: took 6.134863ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184319  127760 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:31.792415  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.793837  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.499687  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:35.500853  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:34.448290  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.948646  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.296844  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.793406  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:40.501951  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.949791  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.448832  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.294594  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.295134  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.000673  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.000747  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.452098  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.947475  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.793152  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.793282  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.003229  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.499682  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.949034  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:50.449118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.455176  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.793896  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.293413  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.293611  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:51.502870  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.000866  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.002047  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.948058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.950946  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.791908  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.792808  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.500328  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.000549  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:59.449089  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.948622  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:00.793090  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.294337  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.002131  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.500315  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.948920  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.949566  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.792376  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.793999  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:08.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.500002  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.950271  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.450074  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.292457  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.294375  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.503977  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:15.000631  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.948486  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.951220  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.448916  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.792888  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:16.793429  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.293010  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.000916  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.499770  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.449088  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.949856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.293433  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.792996  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.506787  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.507411  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:26.001279  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.950269  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.952818  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.793527  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.294892  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.499823  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.500142  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.448303  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.449512  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.793364  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.293202  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.001883  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.500561  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:32.948419  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:34.948716  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:36.949202  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.293744  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:37.294070  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:38.001116  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:40.001502  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.449215  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:41.948577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.793176  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.292783  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.501401  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:45.003364  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:43.950039  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.449043  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:44.792361  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.793184  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.294980  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:47.500147  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.501096  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:48.449912  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:50.950549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:51.794547  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.298465  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.000382  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.005736  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.950635  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:55.449330  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:57.449700  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.792615  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.499865  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:58.499980  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:00.500389  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.950151  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:02.447970  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:01.793306  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.793698  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.001300  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.499370  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:04.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:06.450549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.793804  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.793899  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.500520  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.000481  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:08.950058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:11.449345  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.293157  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.293642  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.500064  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.500937  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:13.949163  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:16.448489  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.793066  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.293467  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.293785  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.003921  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.501044  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:18.953218  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.449082  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.792447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.794479  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.999979  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:24.001269  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.001308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.948517  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:25.949879  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.292488  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.293405  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.499717  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.500472  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.448633  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.455346  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.293436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.296063  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:33.004484  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:35.500190  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.949307  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.949549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.447994  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.792727  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.292297  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.293185  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.501094  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:40.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.448914  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.449574  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.296498  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.794079  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:42.000667  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:44.500084  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.949370  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.448365  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.293571  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.795374  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.501287  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:49.000247  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.002102  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.449326  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:50.950049  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.295712  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.796436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.500278  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.500483  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:52.950509  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.448194  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:57.448444  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:56.293432  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.791909  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.000148  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.000718  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:59.448627  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:01.449178  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.793652  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.798916  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.501103  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:04.504053  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:03.948376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.949118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.293868  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.796468  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.000140  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:09.500040  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.949917  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.449692  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.296954  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.793159  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:11.500724  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:13.501811  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:16.000506  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.948932  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:14.951174  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.448985  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:15.294394  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.792822  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:18.501242  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.000679  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:19.449857  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.949137  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:20.293991  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:22.793476  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.501237  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.001069  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.950208  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.449036  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:25.294562  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:27.792099  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.500763  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.000635  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.947918  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:30.949180  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:29.793559  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.793709  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:34.292407  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:33.001948  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.002761  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:32.949352  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.448233  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.449470  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:36.292723  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:38.792944  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.501308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.001944  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:39.948613  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:41.953252  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.793938  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.796054  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.499956  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.504598  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.453963  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.952856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:45.292988  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:47.792829  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.999714  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.000749  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.000798  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.448592  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.461405  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.793084  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:52.293550  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.001475  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:55.499894  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.952376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.451000  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:54.793373  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.796557  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:59.293830  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:57.501136  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.000501  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:58.949246  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.949331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:01.792604  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.793283  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:02.501611  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.001210  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.449006  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.449356  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:06.291970  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:08.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.502381  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.690392  128282 pod_ready.go:81] duration metric: took 4m0.000056495s waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:07.690437  128282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:07.690447  128282 pod_ready.go:38] duration metric: took 4m3.599656754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:07.690468  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:22:07.690503  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:07.690560  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:07.752216  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:07.752249  128282 cri.go:89] found id: ""
	I1212 23:22:07.752258  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:07.752309  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.757000  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:07.757068  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:07.801367  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:07.801398  128282 cri.go:89] found id: ""
	I1212 23:22:07.801409  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:07.801470  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.806744  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:07.806804  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:07.850495  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:07.850530  128282 cri.go:89] found id: ""
	I1212 23:22:07.850538  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:07.850588  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.855144  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:07.855226  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:07.900092  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:07.900121  128282 cri.go:89] found id: ""
	I1212 23:22:07.900131  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:07.900199  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.904280  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:07.904357  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:07.945991  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:07.946019  128282 cri.go:89] found id: ""
	I1212 23:22:07.946034  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:07.946101  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.951095  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:07.951168  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:07.992586  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:07.992611  128282 cri.go:89] found id: ""
	I1212 23:22:07.992619  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:07.992667  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.996887  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:07.996945  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:08.038769  128282 cri.go:89] found id: ""
	I1212 23:22:08.038810  128282 logs.go:284] 0 containers: []
	W1212 23:22:08.038820  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:08.038829  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:08.038892  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:08.081167  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.081202  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.081209  128282 cri.go:89] found id: ""
	I1212 23:22:08.081225  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:08.081282  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.085740  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.089816  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:08.089836  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:08.137243  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:08.137274  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:08.180654  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:08.180686  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:08.240646  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:08.240684  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:08.289713  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:08.289753  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:08.440863  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:08.440902  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:08.505477  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:08.505516  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.561373  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:08.561411  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:08.626446  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:08.626482  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:08.681726  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:08.681769  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:08.703440  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:08.703468  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.739960  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:08.739998  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:09.213821  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:09.213867  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:07.949577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:09.950086  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.449579  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:10.793412  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.794447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:11.771447  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:11.787326  128282 api_server.go:72] duration metric: took 4m15.571529815s to wait for apiserver process to appear ...
	I1212 23:22:11.787355  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:22:11.787395  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:11.787459  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:11.841146  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:11.841178  128282 cri.go:89] found id: ""
	I1212 23:22:11.841199  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:11.841263  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.845844  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:11.845917  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:11.895757  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:11.895780  128282 cri.go:89] found id: ""
	I1212 23:22:11.895789  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:11.895846  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.900575  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:11.900641  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:11.941848  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:11.941872  128282 cri.go:89] found id: ""
	I1212 23:22:11.941882  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:11.941962  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.948119  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:11.948192  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:11.997102  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:11.997126  128282 cri.go:89] found id: ""
	I1212 23:22:11.997135  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:11.997189  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.002683  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:12.002750  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:12.042120  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:12.042144  128282 cri.go:89] found id: ""
	I1212 23:22:12.042159  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:12.042225  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.047068  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:12.047144  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:12.092055  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:12.092078  128282 cri.go:89] found id: ""
	I1212 23:22:12.092087  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:12.092137  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.097642  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:12.097713  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:12.137481  128282 cri.go:89] found id: ""
	I1212 23:22:12.137521  128282 logs.go:284] 0 containers: []
	W1212 23:22:12.137532  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:12.137542  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:12.137607  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:12.183712  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:12.183735  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.183740  128282 cri.go:89] found id: ""
	I1212 23:22:12.183747  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:12.183813  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.188656  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.193613  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:12.193639  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:12.206911  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:12.206941  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:12.258294  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:12.258335  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.300901  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:12.300934  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:12.765702  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:12.765746  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:12.909101  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:12.909138  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:12.967049  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:12.967083  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:13.010895  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:13.010930  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:13.062291  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:13.062324  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:13.107276  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:13.107320  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:13.166395  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:13.166448  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:13.212812  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:13.212853  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:13.260977  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:13.261022  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:15.816287  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:22:15.821554  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:22:15.822925  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:22:15.822945  128282 api_server.go:131] duration metric: took 4.035583432s to wait for apiserver health ...
	I1212 23:22:15.822954  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:22:15.822976  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:15.823024  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:15.870940  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:15.870981  128282 cri.go:89] found id: ""
	I1212 23:22:15.870993  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:15.871062  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.876167  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:15.876244  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:15.916642  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:15.916671  128282 cri.go:89] found id: ""
	I1212 23:22:15.916682  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:15.916747  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.921173  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:15.921238  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:15.963421  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:15.963449  128282 cri.go:89] found id: ""
	I1212 23:22:15.963461  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:15.963521  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.967747  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:15.967821  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:14.949925  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.949999  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:15.294181  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:17.793324  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.011046  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.011071  128282 cri.go:89] found id: ""
	I1212 23:22:16.011079  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:16.011128  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.015592  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:16.015659  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:16.058065  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:16.058092  128282 cri.go:89] found id: ""
	I1212 23:22:16.058103  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:16.058157  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.062334  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:16.062398  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:16.105032  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:16.105062  128282 cri.go:89] found id: ""
	I1212 23:22:16.105074  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:16.105140  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.109674  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:16.109728  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:16.151188  128282 cri.go:89] found id: ""
	I1212 23:22:16.151221  128282 logs.go:284] 0 containers: []
	W1212 23:22:16.151230  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:16.151246  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:16.151314  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:16.196149  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:16.196191  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.196199  128282 cri.go:89] found id: ""
	I1212 23:22:16.196209  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:16.196272  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.201690  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.205939  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:16.205970  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:16.358186  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:16.358236  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:16.404737  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:16.404780  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.449040  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:16.449069  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.491141  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:16.491173  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:16.860522  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:16.860578  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:16.877982  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:16.878030  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:16.923301  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:16.923338  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:16.965351  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:16.965382  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:17.024559  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:17.024603  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:17.079193  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:17.079229  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:17.123956  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:17.124003  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:17.202000  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:17.202043  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:19.755866  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:22:19.755901  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.755907  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.755914  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.755922  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.755929  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.755936  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.755946  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.755954  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.755963  128282 system_pods.go:74] duration metric: took 3.933003633s to wait for pod list to return data ...
	I1212 23:22:19.755977  128282 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:22:19.758618  128282 default_sa.go:45] found service account: "default"
	I1212 23:22:19.758639  128282 default_sa.go:55] duration metric: took 2.655294ms for default service account to be created ...
	I1212 23:22:19.758647  128282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:22:19.764376  128282 system_pods.go:86] 8 kube-system pods found
	I1212 23:22:19.764398  128282 system_pods.go:89] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.764404  128282 system_pods.go:89] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.764409  128282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.764414  128282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.764418  128282 system_pods.go:89] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.764432  128282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.764444  128282 system_pods.go:89] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.764454  128282 system_pods.go:89] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.764464  128282 system_pods.go:126] duration metric: took 5.811076ms to wait for k8s-apps to be running ...
	I1212 23:22:19.764475  128282 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:22:19.764531  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:19.781048  128282 system_svc.go:56] duration metric: took 16.561836ms WaitForService to wait for kubelet.
	I1212 23:22:19.781100  128282 kubeadm.go:581] duration metric: took 4m23.565309829s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:22:19.781129  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:22:19.784205  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:22:19.784229  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:22:19.784240  128282 node_conditions.go:105] duration metric: took 3.105926ms to run NodePressure ...
	I1212 23:22:19.784253  128282 start.go:228] waiting for startup goroutines ...
	I1212 23:22:19.784259  128282 start.go:233] waiting for cluster config update ...
	I1212 23:22:19.784269  128282 start.go:242] writing updated cluster config ...
	I1212 23:22:19.784545  128282 ssh_runner.go:195] Run: rm -f paused
	I1212 23:22:19.840938  128282 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:22:19.842885  128282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850839" cluster and "default" namespace by default
	I1212 23:22:19.449331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:21.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:20.294156  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:22.792746  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:23.949834  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:26.452555  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.793601  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.985518  128156 pod_ready.go:81] duration metric: took 4m0.000203674s waiting for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:24.985551  128156 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:24.985571  128156 pod_ready.go:38] duration metric: took 4m40.456239368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:24.985600  128156 kubeadm.go:640] restartCluster took 5m2.616770336s
	W1212 23:22:24.985660  128156 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:24.985690  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:28.949293  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:31.449689  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:32.184476  127760 pod_ready.go:81] duration metric: took 4m0.000136331s waiting for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:32.184516  127760 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:32.184559  127760 pod_ready.go:38] duration metric: took 4m12.59080567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:32.184598  127760 kubeadm.go:640] restartCluster took 4m33.093698567s
	W1212 23:22:32.184674  127760 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:32.184715  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:39.117782  128156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.132057077s)
	I1212 23:22:39.117868  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:39.132912  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:39.143453  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:39.153628  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:39.153684  128156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:39.374201  128156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:46.310264  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.12551082s)
	I1212 23:22:46.310350  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:46.327577  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:46.339177  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:46.350355  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:46.350407  127760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:46.414859  127760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:22:46.414971  127760 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:46.599881  127760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:46.600039  127760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:46.600208  127760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:46.867542  127760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:46.869398  127760 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:46.869528  127760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:46.869659  127760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:46.869770  127760 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:46.869933  127760 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:46.870496  127760 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:46.871021  127760 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:46.871802  127760 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:46.873187  127760 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:46.874737  127760 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:46.876316  127760 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:46.877713  127760 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:46.877769  127760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:47.211156  127760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:47.370652  127760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:47.491927  127760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:47.746007  127760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:47.746996  127760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:47.749868  127760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:47.751553  127760 out.go:204]   - Booting up control plane ...
	I1212 23:22:47.751724  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:47.751814  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:47.752662  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:47.770296  127760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:47.770438  127760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:47.770546  127760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.362262  128156 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:22:51.362341  128156 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:51.362461  128156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:51.362593  128156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:51.362706  128156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:51.362781  128156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:51.364439  128156 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:51.364561  128156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:51.364660  128156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:51.364758  128156 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:51.364840  128156 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:51.364971  128156 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:51.365060  128156 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:51.365137  128156 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:51.365215  128156 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:51.365320  128156 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:51.365425  128156 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:51.365479  128156 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:51.365553  128156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:51.365626  128156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:51.365706  128156 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:22:51.365778  128156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:51.365859  128156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:51.365936  128156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:51.366046  128156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:51.366131  128156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:51.368190  128156 out.go:204]   - Booting up control plane ...
	I1212 23:22:51.368316  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:51.368421  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:51.368517  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:51.368649  128156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:51.368763  128156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:51.368813  128156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.369013  128156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.369107  128156 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503652 seconds
	I1212 23:22:51.369231  128156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:51.369390  128156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:51.369465  128156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:51.369709  128156 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-115023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:51.369780  128156 kubeadm.go:322] [bootstrap-token] Using token: agyzoj.wkr94b17dt19k7yx
	I1212 23:22:51.371110  128156 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:51.371306  128156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:51.371421  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:51.371643  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:51.371825  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:51.371975  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:51.372085  128156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:51.372226  128156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:51.372285  128156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:51.372344  128156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:51.372353  128156 kubeadm.go:322] 
	I1212 23:22:51.372425  128156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:51.372437  128156 kubeadm.go:322] 
	I1212 23:22:51.372529  128156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:51.372540  128156 kubeadm.go:322] 
	I1212 23:22:51.372571  128156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:51.372645  128156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:51.372711  128156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:51.372720  128156 kubeadm.go:322] 
	I1212 23:22:51.372793  128156 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:51.372804  128156 kubeadm.go:322] 
	I1212 23:22:51.372861  128156 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:51.372871  128156 kubeadm.go:322] 
	I1212 23:22:51.372933  128156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:51.373050  128156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:51.373137  128156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:51.373149  128156 kubeadm.go:322] 
	I1212 23:22:51.373248  128156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:51.373345  128156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:51.373356  128156 kubeadm.go:322] 
	I1212 23:22:51.373456  128156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373583  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:51.373613  128156 kubeadm.go:322] 	--control-plane 
	I1212 23:22:51.373623  128156 kubeadm.go:322] 
	I1212 23:22:51.373724  128156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:51.373739  128156 kubeadm.go:322] 
	I1212 23:22:51.373842  128156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373985  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:51.374006  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:22:51.374015  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:51.375563  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:47.945457  127760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.376861  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:51.414215  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:51.484549  128156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:51.484635  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.484696  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=no-preload-115023 minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.564599  128156 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:51.924093  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.026923  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.628483  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.128275  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.628006  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:54.127897  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.450625  127760 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504757 seconds
	I1212 23:22:56.450779  127760 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:56.468441  127760 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:57.003074  127760 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:57.003292  127760 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-809120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:57.518097  127760 kubeadm.go:322] [bootstrap-token] Using token: ichlu8.wzw1wbhrbc06xbtw
	I1212 23:22:57.519536  127760 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:57.519639  127760 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:57.528652  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:57.538325  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:57.542226  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:57.551395  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:57.556988  127760 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:57.573462  127760 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:57.833933  127760 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:57.949764  127760 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:57.949788  127760 kubeadm.go:322] 
	I1212 23:22:57.949888  127760 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:57.949913  127760 kubeadm.go:322] 
	I1212 23:22:57.950013  127760 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:57.950036  127760 kubeadm.go:322] 
	I1212 23:22:57.950079  127760 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:57.950155  127760 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:57.950228  127760 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:57.950240  127760 kubeadm.go:322] 
	I1212 23:22:57.950301  127760 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:57.950311  127760 kubeadm.go:322] 
	I1212 23:22:57.950375  127760 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:57.950385  127760 kubeadm.go:322] 
	I1212 23:22:57.950468  127760 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:57.950578  127760 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:57.950678  127760 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:57.950702  127760 kubeadm.go:322] 
	I1212 23:22:57.950818  127760 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:57.950916  127760 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:57.950926  127760 kubeadm.go:322] 
	I1212 23:22:57.951054  127760 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951199  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:57.951231  127760 kubeadm.go:322] 	--control-plane 
	I1212 23:22:57.951266  127760 kubeadm.go:322] 
	I1212 23:22:57.951386  127760 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:57.951396  127760 kubeadm.go:322] 
	I1212 23:22:57.951494  127760 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951619  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:57.952303  127760 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:57.952326  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:22:57.952337  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:57.954692  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:54.628965  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.127922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.627980  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.128047  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.628471  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.128456  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.628284  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.128528  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.628480  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.128296  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.955898  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:57.975567  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:58.044612  127760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:58.044741  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.044746  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=embed-certs-809120 minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.158788  127760 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:58.375305  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.487117  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.075465  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.575132  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.075781  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.575754  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.075376  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.575524  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.075163  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.574821  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.628475  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.128509  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.628837  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.128959  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.627976  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.128077  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.628493  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.128203  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.628549  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.127987  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.627922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.756882  128156 kubeadm.go:1088] duration metric: took 13.272316322s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:04.756928  128156 kubeadm.go:406] StartCluster complete in 5m42.440524658s
	I1212 23:23:04.756955  128156 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.757069  128156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:04.759734  128156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.760081  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:04.760220  128156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:04.760311  128156 addons.go:69] Setting storage-provisioner=true in profile "no-preload-115023"
	I1212 23:23:04.760325  128156 addons.go:69] Setting default-storageclass=true in profile "no-preload-115023"
	I1212 23:23:04.760358  128156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-115023"
	I1212 23:23:04.760385  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:23:04.760332  128156 addons.go:231] Setting addon storage-provisioner=true in "no-preload-115023"
	W1212 23:23:04.760426  128156 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:04.760497  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760337  128156 addons.go:69] Setting metrics-server=true in profile "no-preload-115023"
	I1212 23:23:04.760525  128156 addons.go:231] Setting addon metrics-server=true in "no-preload-115023"
	W1212 23:23:04.760538  128156 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:04.760577  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760759  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760787  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.760953  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760986  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760995  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.761010  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.777848  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1212 23:23:04.778063  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1212 23:23:04.778315  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778479  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778613  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I1212 23:23:04.778931  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778945  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778952  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.778957  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779020  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.779302  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779561  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.779726  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.779749  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779929  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.779961  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.780516  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.781173  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.781207  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.783399  128156 addons.go:231] Setting addon default-storageclass=true in "no-preload-115023"
	W1212 23:23:04.783422  128156 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:04.783452  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.783871  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.783906  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.797493  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:23:04.797741  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I1212 23:23:04.798102  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798132  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798613  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798630  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.798956  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798985  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.799262  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799438  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.799639  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.801934  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.802007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.803861  128156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:04.802341  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:23:04.806911  128156 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:04.805759  128156 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:04.806058  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.808825  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:04.808833  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:04.808848  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:04.808856  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.808863  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.809266  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.809281  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.809624  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.810352  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.810381  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.813139  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813629  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.813654  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813828  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813882  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814303  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.814333  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.814148  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814542  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814625  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.814797  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814855  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.814954  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.815127  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.823127  128156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-115023" context rescaled to 1 replicas
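The rescale to a single CoreDNS replica that kapi.go reports here is roughly equivalent to the following kubectl call (a sketch; minikube performs the scaling through client-go rather than the CLI):

    kubectl --context no-preload-115023 -n kube-system scale deployment coredns --replicas=1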
	I1212 23:23:04.823174  128156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:04.824991  128156 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:04.826596  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:04.827821  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1212 23:23:04.828256  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.828820  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.828845  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.829390  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.829741  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.834167  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.834521  128156 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:04.834539  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:04.834563  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.838055  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838555  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.838587  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838772  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.838964  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.839119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.839284  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.972964  128156 node_ready.go:35] waiting up to 6m0s for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.973014  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:04.998182  128156 node_ready.go:49] node "no-preload-115023" has status "Ready":"True"
	I1212 23:23:04.998214  128156 node_ready.go:38] duration metric: took 25.214785ms waiting for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.998226  128156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
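The extra wait that pod_ready.go performs over the label selectors listed above can be approximated from the command line with kubectl wait (a sketch of the idea, not what the test binary actually runs; kubectl wait errors out if a selector matches no pods):

    # wait for every system-critical component, one label selector at a time
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=360s
    done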
	I1212 23:23:05.012036  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:05.027954  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:05.027977  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:05.063451  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:05.076403  128156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:05.119924  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:05.119957  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:05.216413  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.216443  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:05.285434  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.817542  128156 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
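The sed pipeline a few lines above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.72.1 inside the cluster. The injected block can be inspected afterwards (a verification sketch, assuming kubectl access to the same cluster):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'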
	I1212 23:23:06.316381  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252894593s)
	I1212 23:23:06.316378  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304291472s)
	I1212 23:23:06.316446  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316460  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316491  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316509  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316903  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.316959  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.316966  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.316986  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316916  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317010  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.317022  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316995  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317032  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317327  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.317387  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317408  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.318858  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.318881  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.366104  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.366135  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.366427  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.366481  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.366492  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618093  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332604197s)
	I1212 23:23:06.618161  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618183  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618643  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.618665  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618676  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618684  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618845  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620326  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620340  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.620363  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.620384  128156 addons.go:467] Verifying addon metrics-server=true in "no-preload-115023"
	I1212 23:23:06.622226  128156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
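metrics-server is enabled here, yet its pod (metrics-server-57f55c9bc5-wlql5) is still reported Pending further down, most likely because this test points the deployment at the unreachable image fake.domain/registry.k8s.io/echoserver:1.4 shown earlier. On a normally configured cluster the addon could be checked with (sketch):

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl top nodes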
	I1212 23:23:03.075069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.575772  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.074921  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.575481  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.075785  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.575855  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.075276  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.575017  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.075100  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.575342  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.623716  128156 addons.go:502] enable addons completed in 1.863496659s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:07.165490  128156 pod_ready.go:102] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:08.161341  128156 pod_ready.go:92] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.161380  128156 pod_ready.go:81] duration metric: took 3.084948492s waiting for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.161395  128156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169259  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.169294  128156 pod_ready.go:81] duration metric: took 7.890109ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169309  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176068  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.176097  128156 pod_ready.go:81] duration metric: took 6.779109ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176111  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183056  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.183085  128156 pod_ready.go:81] duration metric: took 6.964809ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183099  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066100  128156 pod_ready.go:92] pod "kube-proxy-qs95k" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.066123  128156 pod_ready.go:81] duration metric: took 883.017234ms waiting for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066132  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357841  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.357874  128156 pod_ready.go:81] duration metric: took 291.734639ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357884  128156 pod_ready.go:38] duration metric: took 4.359648281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:09.357904  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:09.357970  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:09.372791  128156 api_server.go:72] duration metric: took 4.549577037s to wait for apiserver process to appear ...
	I1212 23:23:09.372820  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:09.372841  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:23:09.378375  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:23:09.379855  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:23:09.379882  128156 api_server.go:131] duration metric: took 7.054126ms to wait for apiserver health ...
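The healthz probe logged above hits the apiserver directly. The same check can be reproduced with curl (a sketch; it relies on the default RBAC binding that lets unauthenticated clients read /healthz, otherwise client certificates must be supplied):

    curl -sk https://192.168.72.32:8443/healthz
    # expected output on a healthy control plane: ok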
	I1212 23:23:09.379893  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:09.561188  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:09.561216  128156 system_pods.go:61] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.561221  128156 system_pods.go:61] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.561225  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.561229  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.561235  128156 system_pods.go:61] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.561239  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.561245  128156 system_pods.go:61] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.561249  128156 system_pods.go:61] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.561257  128156 system_pods.go:74] duration metric: took 181.358443ms to wait for pod list to return data ...
	I1212 23:23:09.561265  128156 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:09.756864  128156 default_sa.go:45] found service account: "default"
	I1212 23:23:09.756894  128156 default_sa.go:55] duration metric: took 195.622122ms for default service account to be created ...
	I1212 23:23:09.756905  128156 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:09.960670  128156 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:09.960700  128156 system_pods.go:89] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.960705  128156 system_pods.go:89] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.960710  128156 system_pods.go:89] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.960715  128156 system_pods.go:89] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.960719  128156 system_pods.go:89] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.960723  128156 system_pods.go:89] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.960729  128156 system_pods.go:89] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.960735  128156 system_pods.go:89] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.960744  128156 system_pods.go:126] duration metric: took 203.831934ms to wait for k8s-apps to be running ...
	I1212 23:23:09.960754  128156 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:09.960805  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:09.974511  128156 system_svc.go:56] duration metric: took 13.742619ms WaitForService to wait for kubelet.
	I1212 23:23:09.974543  128156 kubeadm.go:581] duration metric: took 5.15133848s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
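
[Editor's note] The WaitForService step above checks that the kubelet systemd unit is active by running "systemctl is-active --quiet ... kubelet" through the SSH runner. A minimal local-host sketch of the same check, assuming a systemd machine rather than the SSH runner used in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active,
	// so a non-nil error from Run() means the kubelet is not running.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
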
	I1212 23:23:09.974571  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:10.158679  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:10.158708  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:10.158717  128156 node_conditions.go:105] duration metric: took 184.140544ms to run NodePressure ...
	I1212 23:23:10.158730  128156 start.go:228] waiting for startup goroutines ...
	I1212 23:23:10.158736  128156 start.go:233] waiting for cluster config update ...
	I1212 23:23:10.158746  128156 start.go:242] writing updated cluster config ...
	I1212 23:23:10.158996  128156 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:10.222646  128156 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:23:10.224867  128156 out.go:177] * Done! kubectl is now configured to use "no-preload-115023" cluster and "default" namespace by default
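
[Editor's note] The "minor skew: 1" message above comes from comparing the minor versions of the local kubectl (1.28.4) and the cluster's control plane (1.29.0-rc.2). A small sketch of that comparison, using the version strings from the log; the parsing here is a simplified illustration, not minikube's implementation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorVersion extracts the minor component of a "major.minor.patch[-suffix]" string.
func minorVersion(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.28.4", "1.29.0-rc.2" // versions reported in the log above
	skew := minorVersion(cluster) - minorVersion(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
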
	I1212 23:23:08.075026  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:08.574992  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.075693  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.575069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.075713  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.575464  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.075090  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.250257  127760 kubeadm.go:1088] duration metric: took 13.205579442s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:11.250290  127760 kubeadm.go:406] StartCluster complete in 5m12.212668558s
	I1212 23:23:11.250312  127760 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.250409  127760 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:11.253977  127760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.254241  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:11.254250  127760 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:11.254337  127760 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-809120"
	I1212 23:23:11.254351  127760 addons.go:69] Setting default-storageclass=true in profile "embed-certs-809120"
	I1212 23:23:11.254358  127760 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-809120"
	W1212 23:23:11.254366  127760 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:11.254369  127760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-809120"
	I1212 23:23:11.254422  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254431  127760 addons.go:69] Setting metrics-server=true in profile "embed-certs-809120"
	I1212 23:23:11.254457  127760 addons.go:231] Setting addon metrics-server=true in "embed-certs-809120"
	W1212 23:23:11.254466  127760 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:11.254466  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:23:11.254510  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254798  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254802  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254845  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.254902  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254933  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.255058  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.272689  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1212 23:23:11.272926  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1212 23:23:11.273095  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273297  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273444  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1212 23:23:11.273710  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273722  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.273784  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273935  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273947  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274917  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.274942  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.275403  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.275452  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.275615  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.275776  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.276164  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.276199  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.279953  127760 addons.go:231] Setting addon default-storageclass=true in "embed-certs-809120"
	W1212 23:23:11.279984  127760 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:11.280016  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.280439  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.280488  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.296262  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1212 23:23:11.296273  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1212 23:23:11.296731  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.296839  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.297284  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297296  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297304  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297315  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297662  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297722  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297820  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.297867  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1212 23:23:11.297876  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.298202  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.298805  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.298823  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.299106  127760 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-809120" context rescaled to 1 replicas
	I1212 23:23:11.299151  127760 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:11.300876  127760 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:11.299808  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.299838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.299990  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.302374  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:11.303907  127760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:11.305369  127760 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:11.302872  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.307972  127760 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.307992  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:11.308012  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306693  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:11.308064  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:11.308088  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306729  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.312550  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312826  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.312853  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313337  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.313477  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.313493  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313524  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.313558  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313610  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.313772  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313988  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.314165  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.314287  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.334457  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1212 23:23:11.335025  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.335687  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.335719  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.336130  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.336356  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.338062  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.338356  127760 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.338380  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:11.338407  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.341489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342079  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.342119  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342283  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.342499  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.342642  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.342823  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.562179  127760 node_ready.go:35] waiting up to 6m0s for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.562383  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:11.573888  127760 node_ready.go:49] node "embed-certs-809120" has status "Ready":"True"
	I1212 23:23:11.573909  127760 node_ready.go:38] duration metric: took 11.694074ms waiting for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.573919  127760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:11.591310  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:11.634553  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.672164  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.681199  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:11.681232  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:11.910291  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:11.910325  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:11.993110  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:11.993135  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:12.043047  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:13.550517  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.988091372s)
	I1212 23:23:13.550558  127760 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:13.642966  127760 pod_ready.go:102] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:14.387226  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752630931s)
	I1212 23:23:14.387298  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387315  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387321  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715126034s)
	I1212 23:23:14.387345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387359  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387641  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387663  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387675  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387690  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387776  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387801  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387811  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387819  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.388233  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388247  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388248  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.388285  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388291  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388345  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.426683  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.426713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.427017  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.427030  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.427038  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.477873  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.434777303s)
	I1212 23:23:14.477930  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.477944  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478303  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478321  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.478333  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.478357  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478607  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478622  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478632  127760 addons.go:467] Verifying addon metrics-server=true in "embed-certs-809120"
	I1212 23:23:14.480500  127760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:14.481900  127760 addons.go:502] enable addons completed in 3.227656537s: enabled=[storage-provisioner default-storageclass metrics-server]
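
[Editor's note] The addon-enable sequence above stages each manifest into /etc/kubernetes/addons/ over SSH (the "scp memory --> ..." lines) and then applies them with the pinned kubectl binary against the in-VM kubeconfig. A minimal sketch of that apply step, mirroring the command recorded in the log but run locally with os/exec rather than through the SSH runner; paths are copied from the log and error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments for the command it runs,
	// which is how the log's KUBECONFIG is passed through.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("applied metrics-server manifests:\n%s", out)
}
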
	I1212 23:23:15.629572  127760 pod_ready.go:92] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.629599  127760 pod_ready.go:81] duration metric: took 4.038262674s waiting for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.629608  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.638502  127760 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638532  127760 pod_ready.go:81] duration metric: took 8.918039ms waiting for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	E1212 23:23:15.638547  127760 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638556  127760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647047  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.647075  127760 pod_ready.go:81] duration metric: took 8.510672ms waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647089  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655068  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.655091  127760 pod_ready.go:81] duration metric: took 7.994932ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655100  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664338  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.664386  127760 pod_ready.go:81] duration metric: took 9.26869ms waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664401  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732454  127760 pod_ready.go:92] pod "kube-proxy-4nb6w" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:16.732480  127760 pod_ready.go:81] duration metric: took 1.068071012s waiting for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732489  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022376  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:17.022402  127760 pod_ready.go:81] duration metric: took 289.906446ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022423  127760 pod_ready.go:38] duration metric: took 5.448491831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:17.022445  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:17.022494  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:17.039594  127760 api_server.go:72] duration metric: took 5.740406855s to wait for apiserver process to appear ...
	I1212 23:23:17.039620  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:17.039637  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:23:17.044745  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:23:17.046494  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:23:17.046521  127760 api_server.go:131] duration metric: took 6.894306ms to wait for apiserver health ...
	I1212 23:23:17.046531  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:17.227869  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:17.227899  127760 system_pods.go:61] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.227904  127760 system_pods.go:61] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.227909  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.227913  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.227916  127760 system_pods.go:61] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.227920  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.227927  127760 system_pods.go:61] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.227933  127760 system_pods.go:61] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.227944  127760 system_pods.go:74] duration metric: took 181.405975ms to wait for pod list to return data ...
	I1212 23:23:17.227962  127760 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:17.423151  127760 default_sa.go:45] found service account: "default"
	I1212 23:23:17.423181  127760 default_sa.go:55] duration metric: took 195.20215ms for default service account to be created ...
	I1212 23:23:17.423190  127760 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:17.627077  127760 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:17.627104  127760 system_pods.go:89] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.627109  127760 system_pods.go:89] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.627114  127760 system_pods.go:89] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.627118  127760 system_pods.go:89] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.627124  127760 system_pods.go:89] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.627128  127760 system_pods.go:89] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.627135  127760 system_pods.go:89] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.627139  127760 system_pods.go:89] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.627147  127760 system_pods.go:126] duration metric: took 203.952951ms to wait for k8s-apps to be running ...
	I1212 23:23:17.627155  127760 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:17.627197  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:17.641949  127760 system_svc.go:56] duration metric: took 14.784378ms WaitForService to wait for kubelet.
	I1212 23:23:17.641979  127760 kubeadm.go:581] duration metric: took 6.342797652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:17.642005  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:17.823169  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:17.823201  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:17.823214  127760 node_conditions.go:105] duration metric: took 181.202017ms to run NodePressure ...
	I1212 23:23:17.823230  127760 start.go:228] waiting for startup goroutines ...
	I1212 23:23:17.823258  127760 start.go:233] waiting for cluster config update ...
	I1212 23:23:17.823276  127760 start.go:242] writing updated cluster config ...
	I1212 23:23:17.823609  127760 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:17.879192  127760 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:23:17.880946  127760 out.go:177] * Done! kubectl is now configured to use "embed-certs-809120" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:16:32 UTC, ends at Tue 2023-12-12 23:26:58 UTC. --
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.590078057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423618590060784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7f15d01d-6cc8-48cf-ba4e-d552ab154db0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.590955788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b5c894d-3345-45cf-9965-772ac065f784 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.591040186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b5c894d-3345-45cf-9965-772ac065f784 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.591235682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b5c894d-3345-45cf-9965-772ac065f784 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.632400540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=50d4619f-25ec-4b33-8aab-12158f494e47 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.632496566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=50d4619f-25ec-4b33-8aab-12158f494e47 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.634162948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=faa8ce28-50be-49c5-b903-ac8465071db0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.634573735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423618634560118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=faa8ce28-50be-49c5-b903-ac8465071db0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.635297421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bcfd5822-9975-4129-b4bc-30aca5655d64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.635372204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bcfd5822-9975-4129-b4bc-30aca5655d64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.635594813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bcfd5822-9975-4129-b4bc-30aca5655d64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.678709056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ba5ff96d-6d42-44b6-86e4-e66329fda77e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.678793333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ba5ff96d-6d42-44b6-86e4-e66329fda77e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.680443894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8b85dcbe-0ae0-4d8e-88a4-f4a8bdc35edb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.681102544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423618681085759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8b85dcbe-0ae0-4d8e-88a4-f4a8bdc35edb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.682102091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=245993ee-1635-4bd9-ae75-919a5c696db6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.682175898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=245993ee-1635-4bd9-ae75-919a5c696db6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.682388582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=245993ee-1635-4bd9-ae75-919a5c696db6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.726123178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=350b288c-4592-4145-b5e3-693d4fd9afd6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.726183224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=350b288c-4592-4145-b5e3-693d4fd9afd6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.727479141Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=161cbff4-905a-448b-b378-be14a4390c02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.727974301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423618727958533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=161cbff4-905a-448b-b378-be14a4390c02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.728495505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=677d7b54-4e85-419b-b752-3267a6ba2b21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.728599598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=677d7b54-4e85-419b-b752-3267a6ba2b21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:26:58 old-k8s-version-549640 crio[707]: time="2023-12-12 23:26:58.728831729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=677d7b54-4e85-419b-b752-3267a6ba2b21 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0372df844f76c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner       1                   a82e04d6d7390       storage-provisioner
	399cc9a4dae64       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   f30e5ab7b55b5       busybox
	9bfcd578e7bf5       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   bbbff139fe3ae       coredns-5644d7b6d9-4698s
	724f33e972a14       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   e886358cd2af1       kube-proxy-b6lz6
	f0f94bb587a89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   a82e04d6d7390       storage-provisioner
	89884a774b5b4       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   332a20be4b58e       etcd-old-k8s-version-549640
	8800e89e7fd31       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   1dc475944d91d       kube-scheduler-old-k8s-version-549640
	606ff8dc40025       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   6ae6ede330e1a       kube-controller-manager-old-k8s-version-549640
	98d3d9c460328       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   b3165eccb7464       kube-apiserver-old-k8s-version-549640
	
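(Editorial note: the container status table above is the kind of listing a CRI client produces against the CRI-O socket. A minimal sketch of how one might reproduce it by hand, assuming the standard minikube binary name and that crictl is available inside the guest as it normally is on these jobs:

    # Hypothetical reproduction of the table above; profile name taken from this log.
    minikube -p old-k8s-version-549640 ssh -- sudo crictl ps -a   # list all containers, including exited ones
)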
	* 
	* ==> coredns [9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6] <==
	* 2023-12-12T23:17:20.481Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T23:17:20.494Z [INFO] 127.0.0.1:43035 - 7884 "HINFO IN 7347994220414496808.5971008044067915772. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013683582s
	2023-12-12T23:17:25.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-12-12T23:17:35.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I1212 23:17:45.483711       1 trace.go:82] Trace[573070858]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.479181525 +0000 UTC m=+0.187896749) (total time: 30.00442104s):
	Trace[573070858]: [30.00442104s] [30.00442104s] END
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:17:45.483999       1 trace.go:82] Trace[1175423538]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.478986785 +0000 UTC m=+0.187701963) (total time: 30.00498428s):
	Trace[1175423538]: [30.00498428s] [30.00498428s] END
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:17:45.486408       1 trace.go:82] Trace[1090959793]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.485822041 +0000 UTC m=+0.194537223) (total time: 30.000565845s):
	Trace[1090959793]: [30.000565845s] [30.000565845s] END
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2023-12-12T23:17:45.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
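(Editorial note: the repeated "dial tcp 10.96.0.1:443: i/o timeout" entries above indicate that CoreDNS could not reach the kubernetes Service VIP for roughly its first 30 seconds after the restart. A hedged spot check, using names that appear in this log and assuming the kubectl context is still available:

    kubectl --context old-k8s-version-549640 get svc kubernetes -o wide    # ClusterIP should be 10.96.0.1
    kubectl --context old-k8s-version-549640 get endpoints kubernetes      # should list 192.168.61.146:8443
)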
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-549640
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-549640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=old-k8s-version-549640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_07_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:07:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:26:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:26:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:26:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:26:41 +0000   Tue, 12 Dec 2023 23:17:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.146
	  Hostname:    old-k8s-version-549640
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 023b94e4f5064d3b92a8dfc25385dd75
	 System UUID:                023b94e4-f506-4d3b-92a8-dfc25385dd75
	 Boot ID:                    52e5aea4-3448-460f-97f6-e727db27da5a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)         0 (0%)      0 (0%)            0 (0%)          17m
	  kube-system                coredns-5644d7b6d9-4698s                          100m (5%)      0 (0%)      70Mi (3%)         170Mi (8%)      18m
	  kube-system                etcd-old-k8s-version-549640                       0 (0%)         0 (0%)      0 (0%)            0 (0%)          18m
	  kube-system                kube-apiserver-old-k8s-version-549640             250m (12%)     0 (0%)      0 (0%)            0 (0%)          18m
	  kube-system                kube-controller-manager-old-k8s-version-549640    200m (10%)     0 (0%)      0 (0%)            0 (0%)          9m48s
	  kube-system                kube-proxy-b6lz6                                  0 (0%)         0 (0%)      0 (0%)            0 (0%)          18m
	  kube-system                kube-scheduler-old-k8s-version-549640             100m (5%)      0 (0%)      0 (0%)            0 (0%)          17m
	  kube-system                metrics-server-74d5856cc6-hsjtz                   100m (5%)      0 (0%)      200Mi (9%)        0 (0%)          9m31s
	  kube-system                storage-provisioner                               0 (0%)         0 (0%)      0 (0%)            0 (0%)          18m
	  default                    busybox                                           0 (0%)         0 (0%)      0 (0%)            0 (0%)          17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                    kube-proxy, old-k8s-version-549640  Starting kube-proxy.
	  Normal  Starting                 9m56s                  kubelet, old-k8s-version-549640     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m56s (x8 over 9m56s)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x8 over 9m56s)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x7 over 9m56s)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet, old-k8s-version-549640     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m44s                  kube-proxy, old-k8s-version-549640  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec12 23:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354716] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.550543] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148326] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.463375] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.225715] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.095794] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.137505] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.116913] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.240463] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[Dec12 23:17] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +0.488824] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.918632] kauditd_printk_skb: 13 callbacks suppressed
	[ +24.902231] hrtimer: interrupt took 8290587 ns
	
	* 
	* ==> etcd [89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee] <==
	* 2023-12-12 23:17:06.412699 W | auth: simple token is not cryptographically signed
	2023-12-12 23:17:06.415498 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-12 23:17:06.417197 I | etcdserver/membership: added member 52a637c8f882c7df [https://192.168.61.146:2380] to cluster a63b81a8045c22a0
	2023-12-12 23:17:06.417290 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-12 23:17:06.417398 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-12 23:17:06.417979 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 23:17:06.418157 I | embed: listening for metrics on http://192.168.61.146:2381
	2023-12-12 23:17:06.418521 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 23:17:07.409700 I | raft: 52a637c8f882c7df is starting a new election at term 2
	2023-12-12 23:17:07.409976 I | raft: 52a637c8f882c7df became candidate at term 3
	2023-12-12 23:17:07.410025 I | raft: 52a637c8f882c7df received MsgVoteResp from 52a637c8f882c7df at term 3
	2023-12-12 23:17:07.410055 I | raft: 52a637c8f882c7df became leader at term 3
	2023-12-12 23:17:07.410078 I | raft: raft.node: 52a637c8f882c7df elected leader 52a637c8f882c7df at term 3
	2023-12-12 23:17:07.410408 I | etcdserver: published {Name:old-k8s-version-549640 ClientURLs:[https://192.168.61.146:2379]} to cluster a63b81a8045c22a0
	2023-12-12 23:17:07.411048 I | embed: ready to serve client requests
	2023-12-12 23:17:07.411483 I | embed: ready to serve client requests
	2023-12-12 23:17:07.412443 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 23:17:07.416588 I | embed: serving client requests on 192.168.61.146:2379
	2023-12-12 23:17:12.117229 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (118.026013ms) to execute
	2023-12-12 23:17:12.117556 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:deployment-controller\" " with result "range_response_count:1 size:495" took too long (178.987791ms) to execute
	2023-12-12 23:17:14.979219 W | etcdserver: request "header:<ID:14402384478672134807 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-549640.17a038b880c7812c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-549640.17a038b880c7812c\" value_size:402 lease:5179012441817358724 >> failure:<>>" with result "size:16" took too long (132.019764ms) to execute
	2023-12-12 23:17:15.426388 W | etcdserver: request "header:<ID:14402384478672134823 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-549640.17a038b88c60dc49\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-549640.17a038b88c60dc49\" value_size:439 lease:5179012441817358724 >> failure:<>>" with result "size:16" took too long (149.567871ms) to execute
	2023-12-12 23:17:15.437610 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:263" took too long (165.136508ms) to execute
	2023-12-12 23:17:15.446130 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:3065" took too long (173.422947ms) to execute
	2023-12-12 23:17:15.447257 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" " with result "range_response_count:1 size:928" took too long (170.171448ms) to execute
	
	* 
	* ==> kernel <==
	*  23:26:59 up 10 min,  0 users,  load average: 0.20, 0.22, 0.16
	Linux old-k8s-version-549640 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1] <==
	* I1212 23:18:12.752540       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:18:12.752755       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:18:12.752840       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:18:12.752951       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:20:12.753429       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:20:12.753924       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:20:12.754039       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:20:12.754090       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:22:11.897568       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:22:11.897966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:22:11.898061       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:22:11.898085       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:23:11.898415       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:23:11.898615       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:23:11.898681       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:23:11.898704       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:25:11.899236       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:25:11.899343       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:25:11.899400       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:25:11.899407       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
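(Editorial note: the recurring v1beta1.metrics.k8s.io failures above mean the aggregated metrics-server APIService stayed unavailable over the window shown; no metrics-server container appears in the container listing even though the metrics-server-74d5856cc6-hsjtz pod is scheduled. A hedged way to confirm the aggregated API state, with the context and pod name taken from this log:

    kubectl --context old-k8s-version-549640 get apiservice v1beta1.metrics.k8s.io              # Available condition would be False here
    kubectl --context old-k8s-version-549640 -n kube-system describe pod metrics-server-74d5856cc6-hsjtz
)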
	* 
	* ==> kube-controller-manager [606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71] <==
	* E1212 23:20:31.353974       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:20:40.635085       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:21:01.606747       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:21:12.637629       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:21:31.858837       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:21:44.640095       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:22:02.111349       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:22:16.643119       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:22:32.364104       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:22:48.647141       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:23:02.616219       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:23:20.649172       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:23:32.868380       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:23:52.652712       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:24:03.121484       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:24:24.654827       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:24:33.373468       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:24:56.657475       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:25:03.625738       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:25:28.659566       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:25:33.878356       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:26:00.662143       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:26:04.130435       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:26:32.664212       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:26:34.382532       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357] <==
	* W1212 23:08:10.651579       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 23:08:10.659959       1 node.go:135] Successfully retrieved node IP: 192.168.61.146
	I1212 23:08:10.660042       1 server_others.go:149] Using iptables Proxier.
	I1212 23:08:10.660735       1 server.go:529] Version: v1.16.0
	I1212 23:08:10.666989       1 config.go:313] Starting service config controller
	I1212 23:08:10.667083       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 23:08:10.669589       1 config.go:131] Starting endpoints config controller
	I1212 23:08:10.672166       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 23:08:10.767692       1 shared_informer.go:204] Caches are synced for service config 
	I1212 23:08:10.772828       1 shared_informer.go:204] Caches are synced for endpoints config 
	E1212 23:09:20.956706       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=485&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 192.168.61.146:8443: connect: connection refused
	E1212 23:09:20.957215       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=487&timeout=8m45s&timeoutSeconds=525&watch=true: dial tcp 192.168.61.146:8443: connect: connection refused
	W1212 23:17:15.701239       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 23:17:15.715431       1 node.go:135] Successfully retrieved node IP: 192.168.61.146
	I1212 23:17:15.715513       1 server_others.go:149] Using iptables Proxier.
	I1212 23:17:15.716205       1 server.go:529] Version: v1.16.0
	I1212 23:17:15.719063       1 config.go:313] Starting service config controller
	I1212 23:17:15.724997       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 23:17:15.719451       1 config.go:131] Starting endpoints config controller
	I1212 23:17:15.725237       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 23:17:15.825497       1 shared_informer.go:204] Caches are synced for service config 
	I1212 23:17:15.825952       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a] <==
	* E1212 23:07:46.838814       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:46.840721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:46.840844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:07:47.834731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:07:47.835452       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:07:47.842398       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:07:47.845176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:07:47.846311       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:07:47.847011       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:07:47.848112       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:07:47.851500       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:07:47.851725       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:47.851995       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:47.853132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1212 23:17:05.834940       1 serving.go:319] Generated self-signed cert in-memory
	W1212 23:17:10.907694       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:17:10.910369       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:17:10.910752       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:17:10.911508       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:17:10.961926       1 server.go:143] Version: v1.16.0
	I1212 23:17:10.962073       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1212 23:17:10.972790       1 authorization.go:47] Authorization is disabled
	W1212 23:17:10.972975       1 authentication.go:79] Authentication is disabled
	I1212 23:17:10.973020       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1212 23:17:10.973683       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:16:32 UTC, ends at Tue 2023-12-12 23:26:59 UTC. --
	Dec 12 23:22:35 old-k8s-version-549640 kubelet[1024]: E1212 23:22:35.147204    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:22:47 old-k8s-version-549640 kubelet[1024]: E1212 23:22:47.147634    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:22:58 old-k8s-version-549640 kubelet[1024]: E1212 23:22:58.147733    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:23:10 old-k8s-version-549640 kubelet[1024]: E1212 23:23:10.165995    1024 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:23:10 old-k8s-version-549640 kubelet[1024]: E1212 23:23:10.166098    1024 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:23:10 old-k8s-version-549640 kubelet[1024]: E1212 23:23:10.166166    1024 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:23:10 old-k8s-version-549640 kubelet[1024]: E1212 23:23:10.166207    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 12 23:23:21 old-k8s-version-549640 kubelet[1024]: E1212 23:23:21.147673    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:23:33 old-k8s-version-549640 kubelet[1024]: E1212 23:23:33.148998    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:23:44 old-k8s-version-549640 kubelet[1024]: E1212 23:23:44.147485    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:23:55 old-k8s-version-549640 kubelet[1024]: E1212 23:23:55.147492    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:24:10 old-k8s-version-549640 kubelet[1024]: E1212 23:24:10.146968    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:24:23 old-k8s-version-549640 kubelet[1024]: E1212 23:24:23.147459    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:24:36 old-k8s-version-549640 kubelet[1024]: E1212 23:24:36.147324    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:24:47 old-k8s-version-549640 kubelet[1024]: E1212 23:24:47.148106    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:25:01 old-k8s-version-549640 kubelet[1024]: E1212 23:25:01.147054    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:25:14 old-k8s-version-549640 kubelet[1024]: E1212 23:25:14.147734    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:25:25 old-k8s-version-549640 kubelet[1024]: E1212 23:25:25.147556    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:25:39 old-k8s-version-549640 kubelet[1024]: E1212 23:25:39.147304    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:25:50 old-k8s-version-549640 kubelet[1024]: E1212 23:25:50.147064    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:26:01 old-k8s-version-549640 kubelet[1024]: E1212 23:26:01.147179    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:26:13 old-k8s-version-549640 kubelet[1024]: E1212 23:26:13.147515    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:26:24 old-k8s-version-549640 kubelet[1024]: E1212 23:26:24.147113    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:26:38 old-k8s-version-549640 kubelet[1024]: E1212 23:26:38.147102    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:26:49 old-k8s-version-549640 kubelet[1024]: E1212 23:26:49.148029    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015] <==
	* I1212 23:17:44.585228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:17:44.599790       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:17:44.600063       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:18:02.007530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:18:02.008133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898!
	I1212 23:18:02.008795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f830995-bb53-44bd-84b0-2e2877ca6bf5", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898 became leader
	I1212 23:18:02.109469       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898!
	
	* 
	* ==> storage-provisioner [f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e] <==
	* I1212 23:08:11.450819       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:08:11.464864       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:08:11.466359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:08:11.478012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:08:11.480415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8!
	I1212 23:08:11.478350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f830995-bb53-44bd-84b0-2e2877ca6bf5", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8 became leader
	I1212 23:08:11.581901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8!
	I1212 23:17:13.817767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:17:43.826623       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-549640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-hsjtz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz: exit status 1 (68.380906ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-hsjtz" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:31:20.446933349 +0000 UTC m=+5320.257990797
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-850839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-850839 logs -n 25: (1.658751034s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-828988 sudo cat                              | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo find                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo crio                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-828988                                       | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-685244 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | disable-driver-mounts-685244                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:12:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:12:31.006246  128282 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:12:31.006380  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006389  128282 out.go:309] Setting ErrFile to fd 2...
	I1212 23:12:31.006393  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006549  128282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:12:31.007106  128282 out.go:303] Setting JSON to false
	I1212 23:12:31.008035  128282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14105,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:12:31.008097  128282 start.go:138] virtualization: kvm guest
	I1212 23:12:31.010317  128282 out.go:177] * [default-k8s-diff-port-850839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:12:31.011782  128282 notify.go:220] Checking for updates...
	I1212 23:12:31.011787  128282 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:12:31.013177  128282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:12:31.014626  128282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:12:31.016153  128282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:12:31.017420  128282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:12:31.018789  128282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:12:31.020548  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:12:31.021022  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.021073  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.036337  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I1212 23:12:31.036724  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.037285  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.037315  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.037677  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.037910  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.038190  128282 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:12:31.038482  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.038521  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.052455  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1212 23:12:31.052897  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.053408  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.053428  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.053842  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.054041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.090916  128282 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:12:31.092159  128282 start.go:298] selected driver: kvm2
	I1212 23:12:31.092174  128282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.092313  128282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:12:31.092991  128282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.093081  128282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:12:31.108612  128282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:12:31.108979  128282 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:12:31.109050  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:12:31.109064  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:12:31.109078  128282 start_flags.go:323] config:
	{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-85083
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.109261  128282 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.110991  128282 out.go:177] * Starting control plane node default-k8s-diff-port-850839 in cluster default-k8s-diff-port-850839
	I1212 23:12:28.611488  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:31.112184  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:12:31.112223  128282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:12:31.112231  128282 cache.go:56] Caching tarball of preloaded images
	I1212 23:12:31.112315  128282 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:12:31.112331  128282 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:12:31.112435  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:12:31.112621  128282 start.go:365] acquiring machines lock for default-k8s-diff-port-850839: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:12:34.691505  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:37.763538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:43.843515  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:46.915553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:52.995487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:56.067468  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:02.147575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:05.219586  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:11.299553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:14.371547  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:20.451538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:23.523565  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:29.603544  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:32.675516  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:38.755580  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:41.827595  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:47.907601  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:50.979707  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:57.059532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:00.131511  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:06.211489  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:09.283534  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:15.363535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:18.435583  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:24.515478  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:27.587546  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:33.667567  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:36.739532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:42.819531  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:45.891616  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:51.971509  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:55.043560  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:01.123510  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:04.195575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:10.275535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:13.347520  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:19.427542  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:22.499524  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:28.579575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:31.651552  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:37.731535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:40.803533  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:46.883561  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:49.955571  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:56.035557  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:59.107536  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:05.187487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:08.259527  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:14.339497  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:17.411598  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:20.416121  127900 start.go:369] acquired machines lock for "old-k8s-version-549640" in 4m27.702597236s
	I1212 23:16:20.416185  127900 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:20.416197  127900 fix.go:54] fixHost starting: 
	I1212 23:16:20.416598  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:20.416638  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:20.431626  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I1212 23:16:20.432088  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:20.432550  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:16:20.432573  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:20.432976  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:20.433174  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:20.433352  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:16:20.435450  127900 fix.go:102] recreateIfNeeded on old-k8s-version-549640: state=Stopped err=<nil>
	I1212 23:16:20.435477  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	W1212 23:16:20.435650  127900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:20.437467  127900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-549640" ...
	I1212 23:16:20.438890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Start
	I1212 23:16:20.439060  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring networks are active...
	I1212 23:16:20.439992  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network default is active
	I1212 23:16:20.440387  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network mk-old-k8s-version-549640 is active
	I1212 23:16:20.440738  127900 main.go:141] libmachine: (old-k8s-version-549640) Getting domain xml...
	I1212 23:16:20.441435  127900 main.go:141] libmachine: (old-k8s-version-549640) Creating domain...
	I1212 23:16:21.692826  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting to get IP...
	I1212 23:16:21.693784  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.694269  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.694313  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.694229  128878 retry.go:31] will retry after 250.302126ms: waiting for machine to come up
	I1212 23:16:21.945651  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.946122  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.946145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.946067  128878 retry.go:31] will retry after 271.460868ms: waiting for machine to come up
	I1212 23:16:22.219848  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.220326  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.220352  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.220248  128878 retry.go:31] will retry after 466.723624ms: waiting for machine to come up
	I1212 23:16:20.413611  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:20.413648  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:16:20.415967  127760 machine.go:91] provisioned docker machine in 4m37.407647774s
	I1212 23:16:20.416013  127760 fix.go:56] fixHost completed within 4m37.429684827s
	I1212 23:16:20.416025  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 4m37.429713708s
	W1212 23:16:20.416055  127760 start.go:694] error starting host: provision: host is not running
	W1212 23:16:20.416230  127760 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:16:20.416241  127760 start.go:709] Will try again in 5 seconds ...
	I1212 23:16:22.689020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.689524  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.689559  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.689474  128878 retry.go:31] will retry after 384.986526ms: waiting for machine to come up
	I1212 23:16:23.076020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.076428  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.076462  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.076365  128878 retry.go:31] will retry after 673.784203ms: waiting for machine to come up
	I1212 23:16:23.752374  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.752825  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.752859  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.752777  128878 retry.go:31] will retry after 744.371791ms: waiting for machine to come up
	I1212 23:16:24.498624  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:24.499057  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:24.499088  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:24.498994  128878 retry.go:31] will retry after 1.095766265s: waiting for machine to come up
	I1212 23:16:25.596742  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:25.597192  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:25.597217  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:25.597133  128878 retry.go:31] will retry after 1.340596782s: waiting for machine to come up
	I1212 23:16:26.939593  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:26.939933  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:26.939957  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:26.939881  128878 retry.go:31] will retry after 1.546075974s: waiting for machine to come up
	I1212 23:16:25.417922  127760 start.go:365] acquiring machines lock for embed-certs-809120: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:16:28.488543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:28.488923  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:28.488949  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:28.488883  128878 retry.go:31] will retry after 2.06517547s: waiting for machine to come up
	I1212 23:16:30.555809  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:30.556300  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:30.556330  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:30.556262  128878 retry.go:31] will retry after 2.237409729s: waiting for machine to come up
	I1212 23:16:32.796273  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:32.796684  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:32.796712  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:32.796629  128878 retry.go:31] will retry after 3.535954383s: waiting for machine to come up
	I1212 23:16:36.333758  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:36.334211  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:36.334243  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:36.334143  128878 retry.go:31] will retry after 3.820382113s: waiting for machine to come up
	I1212 23:16:41.367963  128156 start.go:369] acquired machines lock for "no-preload-115023" in 4m21.778030837s
	I1212 23:16:41.368034  128156 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:41.368046  128156 fix.go:54] fixHost starting: 
	I1212 23:16:41.368459  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:41.368498  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:41.384557  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1212 23:16:41.385004  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:41.385448  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:16:41.385471  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:41.385799  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:41.386007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:16:41.386192  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:16:41.387807  128156 fix.go:102] recreateIfNeeded on no-preload-115023: state=Stopped err=<nil>
	I1212 23:16:41.387858  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	W1212 23:16:41.388030  128156 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:41.390189  128156 out.go:177] * Restarting existing kvm2 VM for "no-preload-115023" ...
	I1212 23:16:40.159111  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159503  127900 main.go:141] libmachine: (old-k8s-version-549640) Found IP for machine: 192.168.61.146
	I1212 23:16:40.159530  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserving static IP address...
	I1212 23:16:40.159543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has current primary IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159970  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.160042  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | skip adding static IP to network mk-old-k8s-version-549640 - found existing host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"}
	I1212 23:16:40.160060  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserved static IP address: 192.168.61.146
	I1212 23:16:40.160072  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for SSH to be available...
	I1212 23:16:40.160087  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Getting to WaitForSSH function...
	I1212 23:16:40.162048  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162377  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.162417  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162498  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH client type: external
	I1212 23:16:40.162571  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa (-rw-------)
	I1212 23:16:40.162609  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:16:40.162629  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | About to run SSH command:
	I1212 23:16:40.162644  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | exit 0
	I1212 23:16:40.254804  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | SSH cmd err, output: <nil>: 
	I1212 23:16:40.255235  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetConfigRaw
	I1212 23:16:40.255885  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.258196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258526  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.258551  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258806  127900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:16:40.259036  127900 machine.go:88] provisioning docker machine ...
	I1212 23:16:40.259059  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:40.259292  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259454  127900 buildroot.go:166] provisioning hostname "old-k8s-version-549640"
	I1212 23:16:40.259475  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259624  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.261311  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261561  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.261583  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261686  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.261818  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.261974  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.262114  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.262270  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.262645  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.262666  127900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-549640 && echo "old-k8s-version-549640" | sudo tee /etc/hostname
	I1212 23:16:40.395342  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-549640
	
	I1212 23:16:40.395376  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.398008  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398391  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.398430  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398533  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.398716  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.398890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.399024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.399152  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.399489  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.399510  127900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-549640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-549640/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-549640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:40.526781  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:40.526824  127900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:16:40.526847  127900 buildroot.go:174] setting up certificates
	I1212 23:16:40.526859  127900 provision.go:83] configureAuth start
	I1212 23:16:40.526877  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.527276  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.530483  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.530876  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.530908  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.531162  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.533161  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533456  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.533488  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533567  127900 provision.go:138] copyHostCerts
	I1212 23:16:40.533625  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:16:40.533645  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:16:40.533711  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:16:40.533799  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:16:40.533806  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:16:40.533829  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:16:40.533882  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:16:40.533889  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:16:40.533913  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:16:40.533957  127900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-549640 san=[192.168.61.146 192.168.61.146 localhost 127.0.0.1 minikube old-k8s-version-549640]
	I1212 23:16:40.630542  127900 provision.go:172] copyRemoteCerts
	I1212 23:16:40.630611  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:40.630639  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.633145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633408  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.633433  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633579  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.633790  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.633944  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.634162  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:40.725498  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:16:40.748097  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:16:40.769852  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:16:40.791381  127900 provision.go:86] duration metric: configureAuth took 264.501961ms
	I1212 23:16:40.791417  127900 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:40.791602  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:16:40.791678  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.794113  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794479  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.794514  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794653  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.794864  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795055  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795234  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.795443  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.795777  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.795807  127900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:16:41.103469  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:16:41.103503  127900 machine.go:91] provisioned docker machine in 844.450063ms
	I1212 23:16:41.103517  127900 start.go:300] post-start starting for "old-k8s-version-549640" (driver="kvm2")
	I1212 23:16:41.103527  127900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:41.103547  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.103894  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:41.103923  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.106459  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.106835  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.106864  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.107013  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.107190  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.107363  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.107532  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.201177  127900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:41.205686  127900 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:41.205711  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:16:41.205773  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:16:41.205862  127900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:16:41.205970  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:41.214591  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:41.240854  127900 start.go:303] post-start completed in 137.32025ms
	I1212 23:16:41.240885  127900 fix.go:56] fixHost completed within 20.824687398s
	I1212 23:16:41.240915  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.243633  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244071  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.244104  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244300  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.244517  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244651  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244806  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.244981  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:41.245337  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:41.245350  127900 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:16:41.367815  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423001.317394085
	
	I1212 23:16:41.367837  127900 fix.go:206] guest clock: 1702423001.317394085
	I1212 23:16:41.367844  127900 fix.go:219] Guest: 2023-12-12 23:16:41.317394085 +0000 UTC Remote: 2023-12-12 23:16:41.240889292 +0000 UTC m=+288.685284781 (delta=76.504793ms)
	I1212 23:16:41.367863  127900 fix.go:190] guest clock delta is within tolerance: 76.504793ms
	I1212 23:16:41.367868  127900 start.go:83] releasing machines lock for "old-k8s-version-549640", held for 20.951706122s
	I1212 23:16:41.367895  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.368219  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:41.370769  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371172  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.371196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371378  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.371904  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372069  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372157  127900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:16:41.372206  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.372409  127900 ssh_runner.go:195] Run: cat /version.json
	I1212 23:16:41.372438  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.374847  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.374869  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375341  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375373  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375401  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375419  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375526  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375661  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375749  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.375835  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.376026  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376031  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.488636  127900 ssh_runner.go:195] Run: systemctl --version
	I1212 23:16:41.494315  127900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:16:41.645474  127900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:16:41.652912  127900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:16:41.652988  127900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:16:41.667662  127900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:16:41.667680  127900 start.go:475] detecting cgroup driver to use...
	I1212 23:16:41.667747  127900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:16:41.681625  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:16:41.693475  127900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:16:41.693540  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:16:41.705743  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:16:41.719152  127900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:16:41.819641  127900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:16:41.929543  127900 docker.go:219] disabling docker service ...
	I1212 23:16:41.929617  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:16:41.943407  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:16:41.955372  127900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:16:42.063078  127900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:16:42.177422  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:16:42.192994  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:16:42.211887  127900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:16:42.211943  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.223418  127900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:16:42.223486  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.234905  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.245973  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.261016  127900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:16:42.272819  127900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:16:42.283308  127900 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:16:42.283381  127900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:16:42.296365  127900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:16:42.307038  127900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:16:42.412672  127900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:16:42.590363  127900 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:16:42.590470  127900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:16:42.596285  127900 start.go:543] Will wait 60s for crictl version
	I1212 23:16:42.596360  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:42.600633  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:16:42.638709  127900 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:16:42.638811  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.694435  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.750327  127900 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 23:16:41.391501  128156 main.go:141] libmachine: (no-preload-115023) Calling .Start
	I1212 23:16:41.391671  128156 main.go:141] libmachine: (no-preload-115023) Ensuring networks are active...
	I1212 23:16:41.392314  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network default is active
	I1212 23:16:41.392624  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network mk-no-preload-115023 is active
	I1212 23:16:41.393075  128156 main.go:141] libmachine: (no-preload-115023) Getting domain xml...
	I1212 23:16:41.393720  128156 main.go:141] libmachine: (no-preload-115023) Creating domain...
	I1212 23:16:42.669200  128156 main.go:141] libmachine: (no-preload-115023) Waiting to get IP...
	I1212 23:16:42.670068  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.670482  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.670582  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.670462  128998 retry.go:31] will retry after 201.350715ms: waiting for machine to come up
	I1212 23:16:42.874061  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.874543  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.874576  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.874492  128998 retry.go:31] will retry after 331.205906ms: waiting for machine to come up
	I1212 23:16:43.207045  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.207590  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.207618  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.207533  128998 retry.go:31] will retry after 343.139691ms: waiting for machine to come up
	I1212 23:16:43.552253  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.552737  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.552769  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.552683  128998 retry.go:31] will retry after 606.192126ms: waiting for machine to come up
	I1212 23:16:44.160409  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.160877  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.160923  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.160842  128998 retry.go:31] will retry after 713.164162ms: waiting for machine to come up
	I1212 23:16:42.751897  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:42.754490  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.754832  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:42.754867  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.755047  127900 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:16:42.759290  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:42.770851  127900 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 23:16:42.770913  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:42.822484  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:42.822559  127900 ssh_runner.go:195] Run: which lz4
	I1212 23:16:42.826907  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:16:42.831601  127900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:16:42.831633  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 23:16:44.643588  127900 crio.go:444] Took 1.816704 seconds to copy over tarball
	I1212 23:16:44.643671  127900 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:16:47.603870  127900 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960150759s)
	I1212 23:16:47.603904  127900 crio.go:451] Took 2.960288 seconds to extract the tarball
	I1212 23:16:47.603918  127900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:16:44.875548  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.875971  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.876003  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.875908  128998 retry.go:31] will retry after 928.762857ms: waiting for machine to come up
	I1212 23:16:45.806556  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:45.806983  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:45.807019  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:45.806932  128998 retry.go:31] will retry after 945.322601ms: waiting for machine to come up
	I1212 23:16:46.754374  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:46.754834  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:46.754869  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:46.754818  128998 retry.go:31] will retry after 1.373584303s: waiting for machine to come up
	I1212 23:16:48.130434  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:48.130917  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:48.130950  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:48.130870  128998 retry.go:31] will retry after 1.683447661s: waiting for machine to come up
	I1212 23:16:47.644193  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:47.696129  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:47.696156  127900 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.696314  127900 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.696273  127900 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.696242  127900 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.696306  127900 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.696371  127900 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.696445  127900 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:16:47.697649  127900 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.697713  127900 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.697816  127900 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.697955  127900 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:16:47.698013  127900 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.698109  127900 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.698124  127900 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.698341  127900 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.888397  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.897712  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.897790  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.910016  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 23:16:47.911074  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.912891  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.923071  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.995042  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:48.022161  127900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 23:16:48.022215  127900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.022270  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053132  127900 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 23:16:48.053181  127900 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.053236  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053493  127900 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 23:16:48.053531  127900 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.053588  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.123888  127900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 23:16:48.123949  127900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.123889  127900 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 23:16:48.124009  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124022  127900 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 23:16:48.124077  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124089  127900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 23:16:48.124111  127900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 23:16:48.124141  127900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.124171  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124115  127900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.124249  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.205456  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.205488  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.205609  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.205650  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.205702  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 23:16:48.205789  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.205814  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.351665  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 23:16:48.351700  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 23:16:48.360026  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 23:16:48.363255  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 23:16:48.363297  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 23:16:48.363376  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 23:16:48.363413  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:16:48.363525  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369271  127900 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 23:16:48.369289  127900 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369326  127900 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 23:16:50.628595  127900 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.259242667s)
	I1212 23:16:50.628629  127900 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 23:16:50.628679  127900 cache_images.go:92] LoadImages completed in 2.932510127s
	W1212 23:16:50.628774  127900 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
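The image-cache sequence above follows a check-then-load pattern: minikube asks the runtime (via `sudo podman image inspect`) whether each required image is already present, marks anything missing as "needs transfer", removes the stale reference with `crictl rmi`, and reloads the tarball from the on-disk cache with `sudo podman load -i`. The warning shows a cache miss: the kube-scheduler tarball was never written under `cache/images`, so LoadImages gives up on it and kubeadm will have to pull it later. A minimal sketch of that pattern (the `runCmd` helper and direct local exec are illustrative assumptions; the real code in cache_images.go runs these commands over SSH through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for minikube's ssh_runner; it simply execs locally.
func runCmd(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

// ensureImage mirrors the check/rmi/load steps in the log above.
func ensureImage(image, cachedTar string) error {
	// 1. Already in the runtime's store? Then nothing needs transfer.
	if err := runCmd("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image); err == nil {
		return nil
	}
	// 2. Drop any stale/partial copy, then load the cached tarball.
	_ = runCmd("sudo", "crictl", "rmi", image)
	return runCmd("sudo", "podman", "load", "-i", cachedTar)
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println("unable to load cached image:", err)
	}
}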
	I1212 23:16:50.628871  127900 ssh_runner.go:195] Run: crio config
	I1212 23:16:50.696623  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:16:50.696645  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:16:50.696665  127900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:16:50.696690  127900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-549640 NodeName:old-k8s-version-549640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:16:50.696857  127900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-549640"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-549640
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.146:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:16:50.696950  127900 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-549640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:16:50.697013  127900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 23:16:50.706222  127900 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:16:50.706309  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:16:50.714679  127900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 23:16:50.732119  127900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:16:50.749596  127900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 23:16:50.766445  127900 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I1212 23:16:50.770611  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
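The grep/cp one-liner above pins control-plane.minikube.internal to the node IP: it filters out any existing line ending in that alias, appends the current "192.168.61.146<TAB>control-plane.minikube.internal" mapping, and copies the temp file back over /etc/hosts. A rough Go equivalent of that rewrite (a sketch only; minikube actually runs the shell command over SSH, and `ensureHostsEntry` is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one line maps alias to ip,
// mirroring the { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts pattern.
func ensureHostsEntry(hostsPath, ip, alias string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the alias (same as grep -v $'\talias$').
		if strings.HasSuffix(line, "\t"+alias) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, alias))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.146", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}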
	I1212 23:16:50.783162  127900 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640 for IP: 192.168.61.146
	I1212 23:16:50.783205  127900 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:16:50.783434  127900 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:16:50.783504  127900 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:16:50.783623  127900 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.key
	I1212 23:16:50.783701  127900 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key.a124ebb4
	I1212 23:16:50.783781  127900 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key
	I1212 23:16:50.784002  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:16:50.784053  127900 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:16:50.784070  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:16:50.784118  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:16:50.784162  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:16:50.784201  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:16:50.784260  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:50.785202  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:16:50.813072  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:16:50.838714  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:16:50.863302  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:16:50.891365  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:16:50.916623  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:16:50.946894  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:16:50.974859  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:16:51.002629  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:16:51.027782  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:16:51.052384  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:16:51.077430  127900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:16:51.094703  127900 ssh_runner.go:195] Run: openssl version
	I1212 23:16:51.100625  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:16:51.111038  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116246  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116342  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.122069  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:16:51.132325  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:16:51.142392  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147278  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147353  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.153446  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:16:51.163491  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:16:51.173393  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178482  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178560  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.184710  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:16:51.194819  127900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:16:51.199808  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:16:51.206208  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:16:51.212498  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:16:51.218555  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:16:51.224923  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:16:51.231298  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
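The cert checks above come in two flavors: the earlier `openssl x509 -hash -noout` plus `ln -fs .../<hash>.0` calls build the c_rehash-style subject-hash symlinks that OpenSSL uses to find CA certificates under /etc/ssl/certs, and the `-checkend 86400` calls verify that each cluster certificate will still be valid for at least the next 86400 seconds (24 hours); openssl exits 0 if the cert outlives the window and 1 if it will expire. A small sketch of the expiry check (helper name is illustrative; minikube runs the same openssl command over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon reports whether certPath expires within windowSeconds,
// mirroring the `openssl x509 -noout -checkend` calls in the log above.
func expiresSoon(certPath string, windowSeconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", fmt.Sprint(windowSeconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // still valid past the window
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // openssl says the cert will expire within the window
	}
	return false, err // openssl itself failed (missing file, unreadable cert, ...)
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresSoon(c, 86400)
		fmt.Println(c, "expiresWithin24h:", soon, "err:", err)
	}
}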
	I1212 23:16:51.237570  127900 kubeadm.go:404] StartCluster: {Name:old-k8s-version-549640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:16:51.237672  127900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:16:51.237752  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:16:51.283890  127900 cri.go:89] found id: ""
	I1212 23:16:51.283985  127900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:16:51.296861  127900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:16:51.296897  127900 kubeadm.go:636] restartCluster start
	I1212 23:16:51.296990  127900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:16:51.306034  127900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.307730  127900 kubeconfig.go:92] found "old-k8s-version-549640" server: "https://192.168.61.146:8443"
	I1212 23:16:51.311721  127900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:16:51.320683  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.320831  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.332122  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.332145  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.332197  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.342755  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.843464  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.843575  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.854933  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:52.343493  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.343579  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.354884  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:49.816605  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:49.816934  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:49.816968  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:49.816881  128998 retry.go:31] will retry after 1.775884699s: waiting for machine to come up
	I1212 23:16:51.594388  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:51.594915  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:51.594952  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:51.594866  128998 retry.go:31] will retry after 1.948886075s: waiting for machine to come up
	I1212 23:16:53.546035  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:53.546503  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:53.546538  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:53.546441  128998 retry.go:31] will retry after 3.530621748s: waiting for machine to come up
	I1212 23:16:52.842987  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.843085  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.854637  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.343155  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.343261  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.354960  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.843482  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.843555  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.854488  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.342926  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.343028  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.357489  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.843024  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.843111  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.854764  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.343252  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.343363  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.354798  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.843831  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.843931  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.855077  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.343753  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.343827  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.354659  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.843304  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.843423  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.854727  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.343292  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.343428  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.354360  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.078854  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:57.079265  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:57.079287  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:57.079224  128998 retry.go:31] will retry after 3.552473985s: waiting for machine to come up
	I1212 23:17:01.924642  128282 start.go:369] acquired machines lock for "default-k8s-diff-port-850839" in 4m30.811975302s
	I1212 23:17:01.924716  128282 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:01.924725  128282 fix.go:54] fixHost starting: 
	I1212 23:17:01.925164  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:01.925207  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:01.942895  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1212 23:17:01.943340  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:01.943906  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:01.943938  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:01.944371  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:01.944594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:01.944819  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:01.946719  128282 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850839: state=Stopped err=<nil>
	I1212 23:17:01.946759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	W1212 23:17:01.946947  128282 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:01.949597  128282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850839" ...
	I1212 23:16:57.843410  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.843484  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.854821  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.343379  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.343470  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.354868  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.843473  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.843594  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.854752  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.343324  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.343432  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.354442  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.842979  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.843086  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.854537  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.343125  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.343201  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.354401  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.843565  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.843642  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.854663  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:01.321433  127900 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
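The long run of "Checking apiserver status" entries above is a poll loop: roughly every 500ms restartCluster runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node looking for a live kube-apiserver process, and it gives up once its context deadline expires, which is what the kubeadm.go:611 line just above reports before the cluster is reconfigured. A minimal sketch of that loop (the local exec and the 10s deadline in main are illustrative assumptions; the real code runs pgrep over SSH via ssh_runner):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver PID, the way the
// "Checking apiserver status" entries above do, until ctx expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // apiserver process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}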
	I1212 23:17:01.321466  127900 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:01.321477  127900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:01.321534  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:01.361643  127900 cri.go:89] found id: ""
	I1212 23:17:01.361739  127900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:01.380002  127900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:01.388875  127900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:01.388944  127900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397644  127900 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397690  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:01.528111  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
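Because the kubeconfig and admin.conf files were missing, restartCluster copies kubeadm.yaml.new into place and re-runs the individual kubeadm init phases against it, with the version-pinned binaries under /var/lib/minikube/binaries/v1.16.0 first on PATH. A sketch of that phased invocation (local exec and the `runInitPhase` helper are illustrative; minikube issues the equivalent `sudo env PATH=... kubeadm init phase ...` commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhase runs one `kubeadm init phase ...` step against the generated
// config, preferring the version-pinned binaries on PATH.
func runInitPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("kubeadm", args...)
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.16.0:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Same order as the log: certs first, then the kubeconfig files.
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
	} {
		if err := runInitPhase(phase...); err != nil {
			fmt.Println("kubeadm phase failed:", phase, err)
			return
		}
	}
}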
	I1212 23:17:00.635998  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636444  128156 main.go:141] libmachine: (no-preload-115023) Found IP for machine: 192.168.72.32
	I1212 23:17:00.636462  128156 main.go:141] libmachine: (no-preload-115023) Reserving static IP address...
	I1212 23:17:00.636478  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has current primary IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.636925  128156 main.go:141] libmachine: (no-preload-115023) DBG | skip adding static IP to network mk-no-preload-115023 - found existing host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"}
	I1212 23:17:00.636939  128156 main.go:141] libmachine: (no-preload-115023) Reserved static IP address: 192.168.72.32
	I1212 23:17:00.636961  128156 main.go:141] libmachine: (no-preload-115023) Waiting for SSH to be available...
	I1212 23:17:00.636971  128156 main.go:141] libmachine: (no-preload-115023) DBG | Getting to WaitForSSH function...
	I1212 23:17:00.639074  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639400  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.639443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639546  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH client type: external
	I1212 23:17:00.639586  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa (-rw-------)
	I1212 23:17:00.639629  128156 main.go:141] libmachine: (no-preload-115023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:00.639644  128156 main.go:141] libmachine: (no-preload-115023) DBG | About to run SSH command:
	I1212 23:17:00.639663  128156 main.go:141] libmachine: (no-preload-115023) DBG | exit 0
	I1212 23:17:00.734735  128156 main.go:141] libmachine: (no-preload-115023) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:00.735132  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetConfigRaw
	I1212 23:17:00.735813  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:00.738429  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.738828  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.738871  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.739049  128156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:17:00.739276  128156 machine.go:88] provisioning docker machine ...
	I1212 23:17:00.739299  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:00.739537  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739695  128156 buildroot.go:166] provisioning hostname "no-preload-115023"
	I1212 23:17:00.739717  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739879  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.742096  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742404  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.742443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742591  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.742756  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.742925  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.743067  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.743224  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.743733  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.743751  128156 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-115023 && echo "no-preload-115023" | sudo tee /etc/hostname
	I1212 23:17:00.888573  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-115023
	
	I1212 23:17:00.888610  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.891302  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891619  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.891664  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891852  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.892092  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892263  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892419  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.892584  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.892911  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.892930  128156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-115023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-115023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-115023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:01.032180  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:01.032222  128156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:01.032257  128156 buildroot.go:174] setting up certificates
	I1212 23:17:01.032273  128156 provision.go:83] configureAuth start
	I1212 23:17:01.032291  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:01.032653  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.035024  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035334  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.035361  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035494  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.037594  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.037898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.037930  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.038066  128156 provision.go:138] copyHostCerts
	I1212 23:17:01.038122  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:01.038143  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:01.038202  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:01.038322  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:01.038334  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:01.038355  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:01.038470  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:01.038481  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:01.038499  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:01.038575  128156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.no-preload-115023 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube no-preload-115023]
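The "generating server cert" step above issues a server certificate signed by the shared minikube CA whose SANs cover the VM's IP, loopback, and the machine's names, so TLS clients can reach the machine by any of those addresses. A sketch of building such a template with Go's crypto/x509, using the san=[...] list and the 26280h CertExpiration from this run (self-signing here is purely for illustration; minikube signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-115023"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching san=[192.168.72.32 ... localhost 127.0.0.1 minikube no-preload-115023]
		DNSNames:    []string{"localhost", "minikube", "no-preload-115023"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.32"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}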
	I1212 23:17:01.146965  128156 provision.go:172] copyRemoteCerts
	I1212 23:17:01.147027  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:01.147053  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.149326  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149621  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.149656  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149774  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.149969  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.150118  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.150238  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.244271  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:01.267206  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:17:01.289286  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:01.311940  128156 provision.go:86] duration metric: configureAuth took 279.648376ms
	I1212 23:17:01.311970  128156 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:01.312144  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:17:01.312229  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.314543  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.314881  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.314907  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.315055  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.315281  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315469  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315658  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.315821  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.316162  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.316185  128156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:01.644687  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:01.644737  128156 machine.go:91] provisioned docker machine in 905.44182ms
	I1212 23:17:01.644750  128156 start.go:300] post-start starting for "no-preload-115023" (driver="kvm2")
	I1212 23:17:01.644764  128156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:01.644781  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.645148  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:01.645186  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.647976  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648333  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.648369  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648572  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.648769  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.648972  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.649102  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.746191  128156 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:01.750374  128156 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:01.750416  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:01.750499  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:01.750605  128156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:01.750721  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:01.760389  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:01.788014  128156 start.go:303] post-start completed in 143.244652ms
	I1212 23:17:01.788052  128156 fix.go:56] fixHost completed within 20.420006869s
	I1212 23:17:01.788083  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.790868  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791357  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.791392  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791675  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.791911  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792276  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.792463  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.792889  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.792903  128156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:01.924437  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423021.865464875
	
	I1212 23:17:01.924464  128156 fix.go:206] guest clock: 1702423021.865464875
	I1212 23:17:01.924477  128156 fix.go:219] Guest: 2023-12-12 23:17:01.865464875 +0000 UTC Remote: 2023-12-12 23:17:01.788058057 +0000 UTC m=+282.352654726 (delta=77.406818ms)
	I1212 23:17:01.924532  128156 fix.go:190] guest clock delta is within tolerance: 77.406818ms
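The fix.go lines above read the guest's wall clock with "date +%s.%N", compare it to the host-side timestamp, and accept the ~77ms delta. A minimal Go sketch of that comparison, assuming a 2-second tolerance (the real threshold lives in minikube's fix.go and may differ):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the "seconds.nanoseconds" string printed by
// `date +%s.%N`; it assumes the 9-digit fractional part that date emits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1702423021.865464875") // value from the log above
	if err != nil {
		panic(err)
	}
	local := time.Now()
	delta := time.Duration(math.Abs(float64(local.Sub(guest))))
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's exact value
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}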
	I1212 23:17:01.924542  128156 start.go:83] releasing machines lock for "no-preload-115023", held for 20.556534447s
	I1212 23:17:01.924581  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.924871  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.927873  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928206  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.928238  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928450  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929098  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929301  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929387  128156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:01.929448  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.929516  128156 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:01.929559  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.932560  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.932593  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933001  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933031  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933059  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933081  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933340  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933430  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933547  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933659  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933919  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.933923  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.934097  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.934170  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:02.029559  128156 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:02.056382  128156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:02.199375  128156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:02.207131  128156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:02.207208  128156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:02.227083  128156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:02.227111  128156 start.go:475] detecting cgroup driver to use...
	I1212 23:17:02.227174  128156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:02.241611  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:02.253610  128156 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:02.253675  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:02.266973  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:02.280712  128156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:02.406583  128156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:02.548155  128156 docker.go:219] disabling docker service ...
	I1212 23:17:02.548235  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:02.563410  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:02.575968  128156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:02.697146  128156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:02.828963  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:02.842559  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:02.865357  128156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:02.865433  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.878154  128156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:02.878231  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.892188  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.903286  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
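The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. A sketch that assembles the same command strings (execution over SSH via ssh_runner is omitted; the values are the ones shown in the log):

package main

import "fmt"

// crioConfigCommands builds the in-place edits the log performs with sed.
// Keys and the config path are taken from the log lines above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		// point cri-o at the desired pause image
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		// switch the cgroup manager (cgroupfs vs systemd)
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}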
	I1212 23:17:02.915201  128156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:02.927665  128156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:02.938466  128156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:02.938538  128156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:02.954428  128156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
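Above, the sysctl probe exits with status 255 because br_netfilter is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A rough local sketch of that probe-then-fallback sequence (the log runs the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-netfilter sysctl; if it is missing,
// it loads br_netfilter, then makes sure IPv4 forwarding is on. This mirrors
// the sequence in the log but runs locally instead of via ssh_runner.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// sysctl cannot stat the key until the module is loaded
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}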
	I1212 23:17:02.966197  128156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:03.109663  128156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:03.322982  128156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:03.323068  128156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:03.329800  128156 start.go:543] Will wait 60s for crictl version
	I1212 23:17:03.329866  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.335779  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:03.385099  128156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
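start.go then waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to respond. A small sketch of that kind of bounded poll, assuming a plain stat loop with a fixed 500ms interval (the actual wait logic in minikube may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path with a deadline, the way start.go
// waits for /var/run/crio/crio.sock after restarting the runtime.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}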
	I1212 23:17:03.385190  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.438085  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.482280  128156 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:17:03.483965  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:03.487086  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487464  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:03.487495  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487694  128156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:03.492027  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:03.506463  128156 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:17:03.506503  128156 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:03.544301  128156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:17:03.544329  128156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:17:03.544386  128156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.544441  128156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.544474  128156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.544440  128156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.544509  128156 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.544527  128156 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 23:17:03.545656  128156 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.545678  128156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.545726  128156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.545657  128156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.545747  128156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.545758  128156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.545662  128156 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 23:17:03.546098  128156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.724978  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.727403  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.739085  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 23:17:03.747535  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.748286  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.780484  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.826808  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.834529  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.840840  128156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 23:17:03.840893  128156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.840940  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.868056  128156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 23:17:03.868106  128156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.868157  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.043948  128156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 23:17:04.044014  128156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.044063  128156 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 23:17:04.044102  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044167  128156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 23:17:04.044207  128156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.044252  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044103  128156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.044334  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044375  128156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 23:17:04.044401  128156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.044444  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:04.044446  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044489  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:04.044401  128156 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 23:17:04.044520  128156 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.044545  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.065308  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.065326  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.065380  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.065495  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.065541  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.167939  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.168062  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.207196  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.207344  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.261679  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 23:17:04.261767  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:04.293250  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 23:17:04.293382  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:04.298843  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.298927  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.298960  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.299043  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.299066  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 23:17:04.299125  128156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:04.299187  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299201  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.299219  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299272  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.302178  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 23:17:04.302502  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 23:17:04.311377  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 23:17:04.311421  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
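The cache_images.go lines above decide, per image, whether the runtime already holds it at the expected ID; anything missing is loaded from the local tarball cache with "sudo podman load -i ...". A simplified sketch of that decision, where the map stands in for the parsed output of "crictl images --output json" and the second image ID is a placeholder:

package main

import "fmt"

// needsTransfer reports whether the runtime lacks the image at the expected ID,
// in which case it would be loaded from the cached tarball.
func needsTransfer(runtimeImages map[string]string, image, wantID string) bool {
	id, ok := runtimeImages[image]
	return !ok || id != wantID
}

func main() {
	runtimeImages := map[string]string{} // empty, as in the log: nothing preloaded
	required := map[string]string{
		// kube-scheduler ID taken from the log above; the pause ID is a placeholder
		"registry.k8s.io/kube-scheduler:v1.29.0-rc.2": "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210",
		"registry.k8s.io/pause:3.9":                   "placeholder-image-id",
	}
	for image, wantID := range required {
		if needsTransfer(runtimeImages, image, wantID) {
			fmt.Printf("%s needs transfer: would load its tarball from /var/lib/minikube/images via podman load\n", image)
		}
	}
}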
	I1212 23:17:01.950988  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Start
	I1212 23:17:01.951206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring networks are active...
	I1212 23:17:01.952109  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network default is active
	I1212 23:17:01.952459  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network mk-default-k8s-diff-port-850839 is active
	I1212 23:17:01.953041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Getting domain xml...
	I1212 23:17:01.953769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Creating domain...
	I1212 23:17:03.377195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting to get IP...
	I1212 23:17:03.378157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378619  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378696  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.378589  129129 retry.go:31] will retry after 235.08446ms: waiting for machine to come up
	I1212 23:17:03.614763  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615258  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615288  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.615169  129129 retry.go:31] will retry after 349.415903ms: waiting for machine to come up
	I1212 23:17:03.965990  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966570  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966670  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.966628  129129 retry.go:31] will retry after 318.332956ms: waiting for machine to come up
	I1212 23:17:04.286225  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286728  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.286676  129129 retry.go:31] will retry after 554.258457ms: waiting for machine to come up
	I1212 23:17:04.843362  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843928  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843975  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.843882  129129 retry.go:31] will retry after 539.399246ms: waiting for machine to come up
	I1212 23:17:05.384807  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385237  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385267  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:05.385213  129129 retry.go:31] will retry after 793.160743ms: waiting for machine to come up
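The default-k8s-diff-port-850839 run above repeatedly queries libvirt for a DHCP lease and backs off between attempts (retry.go). A minimal sketch of that retry-with-jitter pattern; lookupIP is a hypothetical stand-in for the lease query and the returned address is illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease lookup; it is hypothetical and
// fails the first few attempts to mimic "unable to find current IP address".
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.50.100", nil // illustrative address only
}

func main() {
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// retry.go-style jittered backoff between polls
		wait := time.Duration(200+rand.Intn(800)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
}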
	I1212 23:17:02.653275  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125123388s)
	I1212 23:17:02.653305  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:02.888884  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.005743  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.124339  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:03.124427  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.154719  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.679193  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.179381  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.678654  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.701429  127900 api_server.go:72] duration metric: took 1.577102613s to wait for apiserver process to appear ...
	I1212 23:17:04.701456  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:04.701476  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:06.586652  128156 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.287578103s)
	I1212 23:17:06.586693  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 23:17:06.586710  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.28741029s)
	I1212 23:17:06.586731  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 23:17:06.586768  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:06.586859  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:09.053122  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.466228622s)
	I1212 23:17:09.053156  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 23:17:09.053187  128156 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:09.053239  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:06.180206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180792  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180826  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:06.180767  129129 retry.go:31] will retry after 1.183884482s: waiting for machine to come up
	I1212 23:17:07.365977  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366537  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:07.366465  129129 retry.go:31] will retry after 1.171346567s: waiting for machine to come up
	I1212 23:17:08.539985  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540457  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540493  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:08.540397  129129 retry.go:31] will retry after 1.176896883s: waiting for machine to come up
	I1212 23:17:09.718657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719110  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719142  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:09.719045  129129 retry.go:31] will retry after 2.075378734s: waiting for machine to come up
	I1212 23:17:09.703531  127900 api_server.go:269] stopped: https://192.168.61.146:8443/healthz: Get "https://192.168.61.146:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 23:17:09.703600  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:10.880325  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:10.880391  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:11.380886  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.408357  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.408420  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:11.880867  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.888735  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.888783  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:12.381393  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:12.390271  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:12.399780  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:12.399818  127900 api_server.go:131] duration metric: took 7.698353874s to wait for apiserver health ...
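The old-k8s-version run polls https://192.168.61.146:8443/healthz until it returns 200, treating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet". A rough sketch of such a loop; the skip-verify TLS client is for illustration only (minikube authenticates with the cluster's CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200,
// treating 403/500 as "still starting".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.146:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}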
	I1212 23:17:12.399832  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:17:12.399842  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:12.401614  127900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:12.403088  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:12.416722  127900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
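The file copied to /etc/cni/net.d/1-k8s.conflist above is a small bridge CNI configuration in JSON. A sketch that emits one such document; the field values, including the subnet, are illustrative rather than the exact conflist minikube embeds:

package main

import (
	"encoding/json"
	"fmt"
)

// Emit a minimal bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. Values here are for illustration only.
func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // illustrative pod subnet
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}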
	I1212 23:17:12.439451  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:12.452826  127900 system_pods.go:59] 7 kube-system pods found
	I1212 23:17:12.452870  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:12.452879  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:12.452886  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:12.452893  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Pending
	I1212 23:17:12.452901  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:12.452907  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:12.452914  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:12.452924  127900 system_pods.go:74] duration metric: took 13.446573ms to wait for pod list to return data ...
	I1212 23:17:12.452937  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:12.459638  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:12.459679  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:12.459697  127900 node_conditions.go:105] duration metric: took 6.754094ms to run NodePressure ...
	I1212 23:17:12.459722  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:12.767529  127900 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775696  127900 kubeadm.go:787] kubelet initialised
	I1212 23:17:12.775720  127900 kubeadm.go:788] duration metric: took 8.16519ms waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775730  127900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:12.781477  127900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.789136  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789163  127900 pod_ready.go:81] duration metric: took 7.661481ms waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.789174  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789183  127900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.794618  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794658  127900 pod_ready.go:81] duration metric: took 5.45869ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.794671  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794689  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.801021  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801052  127900 pod_ready.go:81] duration metric: took 6.346779ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.801065  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801074  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.845211  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845243  127900 pod_ready.go:81] duration metric: took 44.152184ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.845256  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845263  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.244325  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244373  127900 pod_ready.go:81] duration metric: took 399.10083ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.244387  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244403  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.644414  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644512  127900 pod_ready.go:81] duration metric: took 400.062676ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.644545  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644566  127900 pod_ready.go:38] duration metric: took 868.822745ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
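pod_ready.go above waits on each system-critical pod but skips the wait while the hosting node reports Ready=False. A sketch of the same two checks using client-go (not minikube's own helpers; requires the k8s.io/client-go module), with the node, pod, and kubeconfig names taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady mirrors the "node ... has status Ready: False" short-circuit in the
// log: while the hosting node is not Ready, waiting on the pod is skipped.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17761-76611/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-549640", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-4698s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	switch {
	case !nodeReady(node):
		fmt.Println("node not Ready, skipping pod wait")
	case podReady(pod):
		fmt.Println("pod is Ready")
	default:
		fmt.Println("pod not Ready yet, would keep polling")
	}
}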
	I1212 23:17:13.644601  127900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:13.674724  127900 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:13.674813  127900 kubeadm.go:640] restartCluster took 22.377904832s
	I1212 23:17:13.674838  127900 kubeadm.go:406] StartCluster complete in 22.437279451s
	I1212 23:17:13.674872  127900 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.674959  127900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:13.677846  127900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.680423  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:13.680690  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:17:13.680746  127900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:13.680815  127900 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-549640"
	I1212 23:17:13.680839  127900 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-549640"
	W1212 23:17:13.680850  127900 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:13.680938  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.681342  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.681377  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.681658  127900 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-549640"
	I1212 23:17:13.681702  127900 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-549640"
	W1212 23:17:13.681711  127900 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:13.681780  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.682200  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.682237  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.682462  127900 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-549640"
	I1212 23:17:13.682544  127900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-549640"
	I1212 23:17:13.683062  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.683126  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.702138  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1212 23:17:13.702380  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I1212 23:17:13.702684  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702944  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702956  127900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-549640" context rescaled to 1 replicas
	I1212 23:17:13.702990  127900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:13.704074  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.704211  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.706640  127900 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:13.708293  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:13.706664  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706671  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706806  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I1212 23:17:13.709240  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.709383  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.709441  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.709852  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.709874  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.710209  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.710818  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.710867  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.711123  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.711765  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.711842  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.717964  127900 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-549640"
	W1212 23:17:13.717989  127900 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:13.718020  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.718447  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.718493  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.738529  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1212 23:17:13.739214  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.739827  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.739854  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.740246  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.740847  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.740917  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.747710  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1212 23:17:13.748150  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.748772  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.748793  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.749177  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.749348  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.749413  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 23:17:13.750144  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.751385  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.753201  127900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:13.754814  127900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:13.754827  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:13.754840  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.754702  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.754893  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.756310  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.756707  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.758906  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.758937  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.758961  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.760001  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.760051  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.760288  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.763360  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.763607  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.770081  127900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:10.003107  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 23:17:10.003162  128156 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:10.003218  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:12.288548  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.285296733s)
	I1212 23:17:12.288591  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 23:17:12.288623  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:12.288674  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:13.771543  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:13.771565  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:13.769624  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I1212 23:17:13.771589  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.772282  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.772841  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.772898  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.773284  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.773451  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.775327  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.775699  127900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:13.775713  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:13.775738  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.779093  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779539  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.779563  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779784  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.779957  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.780110  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.780255  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.787297  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.787663  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.787729  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.788010  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.789645  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.789826  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.790032  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.956110  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:13.956139  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:13.974813  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:14.024369  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:14.045961  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:14.045998  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:14.133161  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.133192  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:14.342486  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.827118  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146649731s)
	I1212 23:17:14.827249  127900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:14.827300  127900 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.118984074s)
	I1212 23:17:14.827324  127900 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:15.050916  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.076057269s)
	I1212 23:17:15.051030  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051049  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.051444  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.051497  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.051508  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.051517  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051527  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.053501  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.053573  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.053586  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.229413  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.229504  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.229934  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.231467  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.231489  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.522482  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.49806272s)
	I1212 23:17:15.522554  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.522574  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.522920  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.522971  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.522989  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.523009  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.523024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.523301  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.523322  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558083  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.21554598s)
	I1212 23:17:15.558173  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558200  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.558568  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.558591  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558603  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558613  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.559348  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.559370  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.559364  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.559387  127900 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-549640"
	I1212 23:17:15.562044  127900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 23:17:11.796385  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796896  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796930  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:11.796831  129129 retry.go:31] will retry after 2.569081306s: waiting for machine to come up
	I1212 23:17:14.369090  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:14.369522  129129 retry.go:31] will retry after 3.566691604s: waiting for machine to come up
	I1212 23:17:15.563724  127900 addons.go:502] enable addons completed in 1.882971652s: enabled=[default-storageclass storage-provisioner metrics-server]
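
For reference, the addon flow above copies each manifest to /etc/kubernetes/addons over SSH and then applies it with the cluster's own kubectl under the minikube kubeconfig. A minimal Go sketch of that apply step, assuming it runs on the node itself and that the binary and manifest paths from the log exist there; applyAddon is an illustrative name, not minikube's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon mirrors the "sudo KUBECONFIG=<cfg> <kubectl> apply -f <manifest>..." calls in the log.
    func applyAddon(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := applyAddon(
            "/var/lib/minikube/binaries/v1.16.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        ); err != nil {
            fmt.Println(err)
        }
    }
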
	I1212 23:17:17.065214  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:15.574585  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.285870336s)
	I1212 23:17:15.574622  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 23:17:15.574667  128156 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:15.574736  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:17.937618  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938021  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938052  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:17.937984  129129 retry.go:31] will retry after 2.790781234s: waiting for machine to come up
	I1212 23:17:20.730659  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731151  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731179  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:20.731128  129129 retry.go:31] will retry after 5.345575973s: waiting for machine to come up
	I1212 23:17:19.564344  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:21.564330  127900 node_ready.go:49] node "old-k8s-version-549640" has status "Ready":"True"
	I1212 23:17:21.564356  127900 node_ready.go:38] duration metric: took 6.737022414s waiting for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:21.564367  127900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:21.569573  127900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
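
The "waiting up to 6m0s for node ... to be \"Ready\"" and pod_ready entries above are polls of the Ready condition through the API server. A hedged client-go sketch of the node-side wait; the kubeconfig path and node name are taken from the log, while the poll interval is an assumption, and this is not minikube's actual node_ready implementation (it assumes client-go is available in the module):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-549640", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
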
	I1212 23:17:19.606668  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.031891087s)
	I1212 23:17:19.606701  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 23:17:19.606731  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:19.606791  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:21.765860  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.159035751s)
	I1212 23:17:21.765896  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 23:17:21.765934  128156 cache_images.go:123] Successfully loaded all cached images
	I1212 23:17:21.765944  128156 cache_images.go:92] LoadImages completed in 18.221602939s
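
The image-load sequence above transfers each cached tarball to /var/lib/minikube/images and feeds it to podman so cri-o can serve it. A rough Go sketch of a single load step, assuming the tarball paths from the log already exist on the node; loadImage is an illustrative helper name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadImage mirrors the "sudo podman load -i <tarball>" calls in the log.
    func loadImage(tarball string) error {
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        return nil
    }

    func main() {
        for _, img := range []string{
            "/var/lib/minikube/images/coredns_v1.11.1",
            "/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
        } {
            if err := loadImage(img); err != nil {
                fmt.Println(err)
            }
        }
    }
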
	I1212 23:17:21.766033  128156 ssh_runner.go:195] Run: crio config
	I1212 23:17:21.818966  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:21.818996  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:21.819021  128156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:21.819048  128156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-115023 NodeName:no-preload-115023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:21.819220  128156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-115023"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:21.819310  128156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-115023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:21.819369  128156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:17:21.829605  128156 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:21.829690  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:21.838518  128156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 23:17:21.854214  128156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:17:21.869927  128156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
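
The generated kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the live kubeadm.yaml to decide between a plain restart and a reconfigure. A small sketch of that write-and-diff pattern; the paths match the log and the YAML content is abbreviated:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Abbreviated stand-in for the full generated config shown above.
        rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ...remaining generated sections...\n")
        newPath := "/var/tmp/minikube/kubeadm.yaml.new"
        if err := os.WriteFile(newPath, rendered, 0o644); err != nil {
            fmt.Println("write:", err)
            return
        }
        // A non-zero diff exit status just means the files differ, i.e. a reconfigure is needed.
        out, err := exec.Command("sudo", "diff", "-u", "/var/tmp/minikube/kubeadm.yaml", newPath).CombinedOutput()
        fmt.Printf("diff err=%v\n%s", err, out)
    }
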
	I1212 23:17:21.886723  128156 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:21.890481  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
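
The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP. A pure-Go equivalent of the same filter-and-append step, writing to a scratch file instead of /etc/hosts (the real flow writes /tmp/h.$$ and copies it back with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.72.32"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale entry for the control-plane alias; keep everything else.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        // Scratch file only; overwriting /etc/hosts needs root.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Println(err)
        }
    }
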
	I1212 23:17:21.902964  128156 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023 for IP: 192.168.72.32
	I1212 23:17:21.902993  128156 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:21.903156  128156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:21.903194  128156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:21.903275  128156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.key
	I1212 23:17:21.903357  128156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key.9d394d40
	I1212 23:17:21.903393  128156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key
	I1212 23:17:21.903509  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:21.903540  128156 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:21.903550  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:21.903583  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:21.903623  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:21.903647  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:21.903687  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:21.904310  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:21.928095  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:17:21.951412  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:21.974936  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:21.997877  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:22.020598  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:22.042859  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:22.065941  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:22.088688  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:22.110493  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:22.132736  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:22.154394  128156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:22.170427  128156 ssh_runner.go:195] Run: openssl version
	I1212 23:17:22.176106  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:22.186617  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191355  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191423  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.196989  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:22.208456  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:22.219355  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224154  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224224  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.230069  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:22.240929  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:22.251836  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256441  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256496  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.261952  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
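
The openssl/ln pairs above compute each certificate's OpenSSL subject hash and link it into /etc/ssl/certs as <hash>.0, which is how the trust store locates it. A short Go sketch of that hash-and-link step, pointed at a scratch directory and assuming the openssl binary is on PATH; linkByHash is an illustrative name:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash reproduces `openssl x509 -hash -noout -in <cert>` followed by `ln -fs <cert> <hash>.0`.
    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        _ = os.MkdirAll("/tmp/certs", 0o755) // scratch dir instead of /etc/ssl/certs
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
            fmt.Println(err)
        }
    }
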
	I1212 23:17:22.272452  128156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:22.277105  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:22.283114  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:22.288860  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:22.294416  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:22.300148  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:22.306380  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
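
The `-checkend 86400` invocations above only ask whether each control-plane certificate remains valid for at least another day. The same check can be done in-process with crypto/x509, as in this sketch; the path is taken from the log and the helper name is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // the in-process equivalent of `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }
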
	I1212 23:17:22.316419  128156 kubeadm.go:404] StartCluster: {Name:no-preload-115023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:22.316550  128156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:22.316623  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:22.358616  128156 cri.go:89] found id: ""
	I1212 23:17:22.358703  128156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:22.368800  128156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:22.368823  128156 kubeadm.go:636] restartCluster start
	I1212 23:17:22.368883  128156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:22.378570  128156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.380161  128156 kubeconfig.go:92] found "no-preload-115023" server: "https://192.168.72.32:8443"
	I1212 23:17:22.383451  128156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:22.392995  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.393064  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.405318  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.405337  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.405370  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.416721  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.917468  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.917571  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.929995  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.417616  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.417752  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.430907  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.917522  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.917607  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.929655  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:24.417316  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.417427  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.429590  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
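
The repeated "Checking apiserver status" entries above are a poll loop: pgrep for a kube-apiserver process until one shows up, or the restart path gives up and redeploys the control plane. A compact sketch of such a loop; the pgrep pattern matches the log, while the deadline and retry interval are assumptions rather than minikube's exact values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget, not minikube's exact timeout
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            // pgrep exits 1 while no matching process exists; wait and retry.
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never appeared; fall back to restarting the cluster")
    }
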
	I1212 23:17:27.436348  127760 start.go:369] acquired machines lock for "embed-certs-809120" in 1m2.018372087s
	I1212 23:17:27.436407  127760 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:27.436418  127760 fix.go:54] fixHost starting: 
	I1212 23:17:27.436818  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:27.436856  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:27.453079  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1212 23:17:27.453449  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:27.453967  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:17:27.453999  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:27.454365  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:27.454580  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:27.454743  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:17:27.456367  127760 fix.go:102] recreateIfNeeded on embed-certs-809120: state=Stopped err=<nil>
	I1212 23:17:27.456395  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	W1212 23:17:27.456549  127760 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:27.458402  127760 out.go:177] * Restarting existing kvm2 VM for "embed-certs-809120" ...
	I1212 23:17:23.588762  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:26.087305  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:27.459818  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Start
	I1212 23:17:27.459994  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring networks are active...
	I1212 23:17:27.460587  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network default is active
	I1212 23:17:27.460997  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network mk-embed-certs-809120 is active
	I1212 23:17:27.461361  127760 main.go:141] libmachine: (embed-certs-809120) Getting domain xml...
	I1212 23:17:27.462026  127760 main.go:141] libmachine: (embed-certs-809120) Creating domain...
	I1212 23:17:26.081099  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Found IP for machine: 192.168.39.180
	I1212 23:17:26.081626  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has current primary IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081637  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserving static IP address...
	I1212 23:17:26.082029  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserved static IP address: 192.168.39.180
	I1212 23:17:26.082080  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.082119  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for SSH to be available...
	I1212 23:17:26.082157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | skip adding static IP to network mk-default-k8s-diff-port-850839 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"}
	I1212 23:17:26.082182  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Getting to WaitForSSH function...
	I1212 23:17:26.084444  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.084803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084864  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH client type: external
	I1212 23:17:26.084925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa (-rw-------)
	I1212 23:17:26.084971  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:26.084992  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | About to run SSH command:
	I1212 23:17:26.085006  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | exit 0
	I1212 23:17:26.175122  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:26.175455  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetConfigRaw
	I1212 23:17:26.176092  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.178747  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179016  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.179044  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179388  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:17:26.179602  128282 machine.go:88] provisioning docker machine ...
	I1212 23:17:26.179624  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:26.179853  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180033  128282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850839"
	I1212 23:17:26.180051  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180209  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.182470  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.182812  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.182848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.183003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.183193  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183374  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183538  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.183709  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.184100  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.184115  128282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850839 && echo "default-k8s-diff-port-850839" | sudo tee /etc/hostname
	I1212 23:17:26.313520  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850839
	
	I1212 23:17:26.313562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.316848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.317633  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317817  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.318047  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318229  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318344  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.318567  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.318888  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.318907  128282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850839/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:26.443174  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:26.443206  128282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:26.443224  128282 buildroot.go:174] setting up certificates
	I1212 23:17:26.443255  128282 provision.go:83] configureAuth start
	I1212 23:17:26.443273  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.443628  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.446155  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446467  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.446501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446568  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.449661  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450005  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.450041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450170  128282 provision.go:138] copyHostCerts
	I1212 23:17:26.450235  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:26.450258  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:26.450330  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:26.450442  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:26.450453  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:26.450483  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:26.450555  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:26.450565  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:26.450592  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:26.450656  128282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850839 san=[192.168.39.180 192.168.39.180 localhost 127.0.0.1 minikube default-k8s-diff-port-850839]
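
"generating server cert ... san=[...]" above means minting a machine server certificate signed by the minikube CA, with the listed IPs and hostnames as subject alternative names. A condensed crypto/x509 sketch of that signing step; the relative file paths, key size, and validity window are assumptions, and minikube's real implementation differs in detail:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustPEM(path string) *pem.Block {
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in " + path)
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
        if err != nil {
            panic(err)
        }
        // Assumption: the CA key is an RSA key in PKCS#1 PEM form.
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
        if err != nil {
            panic(err)
        }

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-850839"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity window
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-850839"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.180"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }
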
	I1212 23:17:26.688969  128282 provision.go:172] copyRemoteCerts
	I1212 23:17:26.689035  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:26.689060  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.691731  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692004  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.692033  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692207  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.692441  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.692607  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.692736  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:26.781407  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:26.804712  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:17:26.827036  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:26.848977  128282 provision.go:86] duration metric: configureAuth took 405.706324ms
	I1212 23:17:26.849006  128282 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:26.849214  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:26.849310  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.851925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852281  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.852314  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852486  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.852679  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.852860  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.853003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.853172  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.853688  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.853711  128282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:27.183932  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:27.183961  128282 machine.go:91] provisioned docker machine in 1.004345653s
	I1212 23:17:27.183972  128282 start.go:300] post-start starting for "default-k8s-diff-port-850839" (driver="kvm2")
	I1212 23:17:27.183982  128282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:27.183999  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.184348  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:27.184398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.187375  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187709  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.187759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187858  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.188054  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.188248  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.188400  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.277858  128282 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:27.282128  128282 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:27.282157  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:27.282244  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:27.282368  128282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:27.282481  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:27.291755  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:27.313541  128282 start.go:303] post-start completed in 129.554425ms
	I1212 23:17:27.313563  128282 fix.go:56] fixHost completed within 25.388839079s
	I1212 23:17:27.313586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.316388  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316737  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.316760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316934  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.317141  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317343  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317540  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.317789  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:27.318143  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:27.318158  128282 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:27.436207  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423047.383892438
	
	I1212 23:17:27.436230  128282 fix.go:206] guest clock: 1702423047.383892438
	I1212 23:17:27.436237  128282 fix.go:219] Guest: 2023-12-12 23:17:27.383892438 +0000 UTC Remote: 2023-12-12 23:17:27.313567546 +0000 UTC m=+296.357388926 (delta=70.324892ms)
	I1212 23:17:27.436261  128282 fix.go:190] guest clock delta is within tolerance: 70.324892ms
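
(For reference, the delta reported here is just the two timestamps above subtracted: 23:17:27.383892438 guest time minus 23:17:27.313567546 remote time ≈ 0.070324892s, i.e. the 70.324892ms figure the log flags as within minikube's clock-skew tolerance.)
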
	I1212 23:17:27.436266  128282 start.go:83] releasing machines lock for "default-k8s-diff-port-850839", held for 25.511577503s
	I1212 23:17:27.436289  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.436571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:27.439315  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439697  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.439730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440396  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440660  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440741  128282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:27.440793  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.440873  128282 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:27.440891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.443558  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443880  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443938  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.443965  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444132  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444338  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444369  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.444398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444741  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.444788  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444907  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.445073  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.528730  128282 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:27.563590  128282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:27.715220  128282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:27.722775  128282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:27.722883  128282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:27.743217  128282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:27.743264  128282 start.go:475] detecting cgroup driver to use...
	I1212 23:17:27.743344  128282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:27.759125  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:27.772532  128282 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:27.772602  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:27.786439  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:27.800413  128282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:27.905626  128282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:28.037279  128282 docker.go:219] disabling docker service ...
	I1212 23:17:28.037362  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:28.050670  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:28.063551  128282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:28.195512  128282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:28.306881  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:28.324506  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:28.344908  128282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:28.344992  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.354788  128282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:28.354883  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.364157  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.373415  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.383391  128282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:28.393203  128282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:28.401935  128282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:28.402006  128282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:28.413618  128282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:28.426007  128282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:28.536725  128282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:28.711815  128282 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:28.711892  128282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:28.717242  128282 start.go:543] Will wait 60s for crictl version
	I1212 23:17:28.717306  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:17:28.724383  128282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:28.779687  128282 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:28.779781  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.834147  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.894131  128282 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:24.917347  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.917438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.928690  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.417259  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.417343  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.428544  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.917136  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.917212  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.927813  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.417826  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.417917  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.428147  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.917724  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.917803  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.929515  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.416997  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.417102  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.428180  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.917712  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.917830  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.931264  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.417370  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.417479  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.432478  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.916907  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.917039  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.932698  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:29.416883  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.416989  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.434138  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
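
The repeated "Checking apiserver status" / pgrep blocks above are a fixed-interval poll: minikube re-runs the probe roughly every half second until a kube-apiserver PID shows up or the surrounding deadline expires. A minimal Go sketch of that polling pattern, assuming a local shell stand-in for minikube's ssh_runner and illustrative 500ms/60s values (not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runCmd is a stand-in for minikube's ssh_runner: it runs the probe via a
// local shell and returns its trimmed stdout.
func runCmd(ctx context.Context, cmd string) (string, error) {
	out, err := exec.CommandContext(ctx, "sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForAPIServerPID re-runs the pgrep probe until it returns a PID or the
// context deadline expires, mirroring the loop in the log above.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pid, err := runCmd(ctx, `sudo pgrep -xnf kube-apiserver.*minikube.*`)
		if err == nil && pid != "" {
			return pid, nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
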
	I1212 23:17:28.895767  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:28.898899  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899233  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:28.899276  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899500  128282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:28.903950  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:28.917270  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:28.917383  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:28.956752  128282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:28.956832  128282 ssh_runner.go:195] Run: which lz4
	I1212 23:17:28.961387  128282 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:28.965850  128282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:28.965925  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:30.869493  128282 crio.go:444] Took 1.908152 seconds to copy over tarball
	I1212 23:17:30.869580  128282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
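
The "couldn't find preloaded image ... assuming images are not preloaded" decision a few lines up comes from listing the runtime's images and looking for the expected kube-apiserver tag; only when it is absent does minikube copy and unpack the preload tarball as shown here. A rough sketch of that check, shelling out to crictl the same way the log does (the JSON field names follow crictl's `images --output json` output and should be treated as an assumption about your crictl version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the subset of `crictl images --output json` we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already has the given
// reference, e.g. "registry.k8s.io/kube-apiserver:v1.28.4".
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, ref) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	// false + nil error corresponds to the "images are not preloaded" branch above.
	fmt.Println(ok, err)
}
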
	I1212 23:17:28.610279  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:31.088625  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:28.873664  127760 main.go:141] libmachine: (embed-certs-809120) Waiting to get IP...
	I1212 23:17:28.874489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:28.874895  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:28.874992  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:28.874848  129329 retry.go:31] will retry after 244.313261ms: waiting for machine to come up
	I1212 23:17:29.120442  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.120959  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.120997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.120852  129329 retry.go:31] will retry after 369.234988ms: waiting for machine to come up
	I1212 23:17:29.491516  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.492081  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.492124  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.492035  129329 retry.go:31] will retry after 448.746179ms: waiting for machine to come up
	I1212 23:17:29.942643  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.943286  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.943319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.943229  129329 retry.go:31] will retry after 520.98965ms: waiting for machine to come up
	I1212 23:17:30.465955  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:30.466468  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:30.466503  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:30.466430  129329 retry.go:31] will retry after 617.123622ms: waiting for machine to come up
	I1212 23:17:31.085159  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.085706  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.085746  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.085665  129329 retry.go:31] will retry after 853.539861ms: waiting for machine to come up
	I1212 23:17:31.940795  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.941240  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.941265  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.941169  129329 retry.go:31] will retry after 960.346145ms: waiting for machine to come up
	I1212 23:17:29.916897  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.917007  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.932055  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.417555  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.417657  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.433218  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.917841  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.917967  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.933255  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.417271  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.417357  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.429192  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.917804  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.917908  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.930333  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:32.393106  128156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:32.393209  128156 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:32.393228  128156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:32.393315  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:32.445688  128156 cri.go:89] found id: ""
	I1212 23:17:32.445774  128156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:32.462269  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:32.473687  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:32.473768  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483043  128156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483075  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:32.656758  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.442637  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.666131  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.751061  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.855861  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:33.855952  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:33.879438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.403317  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.178083  128282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.308463726s)
	I1212 23:17:34.178124  128282 crio.go:451] Took 3.308601 seconds to extract the tarball
	I1212 23:17:34.178136  128282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:34.219740  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:34.268961  128282 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:34.268987  128282 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:34.269051  128282 ssh_runner.go:195] Run: crio config
	I1212 23:17:34.326979  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:34.327007  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:34.327033  128282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:34.327060  128282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850839 NodeName:default-k8s-diff-port-850839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:34.327252  128282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:34.327353  128282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 23:17:34.327425  128282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:34.338300  128282 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:34.338385  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:34.347329  128282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 23:17:34.364120  128282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:34.380374  128282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
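
The "scp memory -->" lines above stream in-memory renderings of the kubelet drop-in, the kubelet unit and kubeadm.yaml.new straight to the guest rather than copying files from disk. A sketch of that idea using golang.org/x/crypto/ssh and `sudo tee` on the remote side; the IP, user and key path are taken from the log, but this is an illustrative stand-in, not minikube's actual sshutil/ssh_runner code:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams in-memory data to path on the guest via `sudo tee`,
// roughly what the "scp memory --> /etc/..." lines describe.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.180:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	unit := []byte("[Service]\nExecStart=\n") // stand-in for the rendered drop-in
	if err := writeRemote(client, "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", unit); err != nil {
		log.Fatal(err)
	}
}
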
	I1212 23:17:34.398219  128282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:34.402134  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:34.415197  128282 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839 for IP: 192.168.39.180
	I1212 23:17:34.415252  128282 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:34.415436  128282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:34.415472  128282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:34.415540  128282 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.key
	I1212 23:17:34.415593  128282 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key.66237cde
	I1212 23:17:34.415626  128282 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key
	I1212 23:17:34.415739  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:34.415780  128282 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:34.415793  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:34.415841  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:34.415886  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:34.415931  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:34.415990  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:34.416632  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:34.440783  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:34.466303  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:34.491267  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:17:34.516659  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:34.542472  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:34.569367  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:34.599627  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:34.628781  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:34.655361  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:34.681199  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:34.706068  128282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:34.724142  128282 ssh_runner.go:195] Run: openssl version
	I1212 23:17:34.730108  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:34.740221  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745118  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745203  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.751091  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:34.761120  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:34.771456  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776480  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776559  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.782833  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:34.793597  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:34.804519  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809767  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809831  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.815838  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
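
The test -L / ln -fs commands above install each CA under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 51391683.0 for 83825.pem, 3ec20f2e.0 for 838252.pem) so the guest's trust store can resolve it. A small sketch of that hash-and-link step, shelling out to openssl exactly as the log does (paths are the ones shown above; this would run on the guest, not the CI host):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash runs `openssl x509 -hash -noout -in cert` and links
// <hash>.0 in certDir to the certificate, mirroring the log lines above.
func linkBySubjectHash(certDir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
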
	I1212 23:17:34.825967  128282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:34.831487  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:34.838280  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:34.845663  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:34.854810  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:34.862962  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:34.869641  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:17:34.876373  128282 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:34.876509  128282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:34.876579  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:34.918413  128282 cri.go:89] found id: ""
	I1212 23:17:34.918486  128282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:34.928267  128282 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:34.928305  128282 kubeadm.go:636] restartCluster start
	I1212 23:17:34.928396  128282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:34.938202  128282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.939397  128282 kubeconfig.go:92] found "default-k8s-diff-port-850839" server: "https://192.168.39.180:8444"
	I1212 23:17:34.941945  128282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:34.953458  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.953552  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.965537  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.965561  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.965623  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.977454  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.478209  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.478292  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.505825  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.978537  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.978615  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.991422  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:33.591861  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:35.629760  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:32.902889  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:32.903556  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:32.903588  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:32.903500  129329 retry.go:31] will retry after 1.225619987s: waiting for machine to come up
	I1212 23:17:34.130560  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:34.131066  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:34.131098  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:34.131009  129329 retry.go:31] will retry after 1.544530633s: waiting for machine to come up
	I1212 23:17:35.677455  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:35.677916  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:35.677939  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:35.677902  129329 retry.go:31] will retry after 1.740004665s: waiting for machine to come up
	I1212 23:17:37.419743  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:37.420167  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:37.420203  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:37.420121  129329 retry.go:31] will retry after 2.220250897s: waiting for machine to come up
	I1212 23:17:34.902923  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.402835  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.903269  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.403728  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.903298  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.403775  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.903663  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.403892  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.429370  128156 api_server.go:72] duration metric: took 4.573508338s to wait for apiserver process to appear ...
	I1212 23:17:38.429402  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:38.429424  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.429952  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.430019  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.430455  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.931234  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:36.478240  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.478317  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.494437  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:36.978574  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.978654  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.995711  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.478404  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.478484  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.492356  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.977979  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.978123  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.993637  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.478102  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.478227  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.494347  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.977645  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.977771  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.994288  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.477795  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.477942  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.495986  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.978587  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.978695  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.994551  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.477958  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.478056  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.492956  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.978560  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.978663  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.994199  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.089524  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:40.591793  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:39.643094  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:39.643562  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:39.643603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:39.643508  129329 retry.go:31] will retry after 2.987735855s: waiting for machine to come up
	I1212 23:17:42.633477  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:42.633958  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:42.633993  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:42.633907  129329 retry.go:31] will retry after 3.131576961s: waiting for machine to come up
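The libmachine lines above show the wait-for-machine loop: each attempt to read the domain's IP address fails until DHCP hands out a lease, and the waiter sleeps a growing interval between attempts ("will retry after ..."). A minimal sketch of that retry-with-backoff pattern, using made-up attempt counts, delays, and a stand-in readiness check rather than libmachine's real ones, could look like this:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping an increasing, slightly jittered interval between tries,
// similar to the "will retry after ..." pattern in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 25% jitter so concurrent waiters do not retry in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		sleep := delay + jitter
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	// Hypothetical readiness check; libmachine instead asks libvirt for the
	// domain's current IP address on each attempt.
	err := retryWithBackoff(10, 500*time.Millisecond, func() error {
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("waiting for machine to come up")
	})
	fmt.Println(err)
}
```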
	I1212 23:17:41.334632  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:41.334685  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:41.334703  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.392719  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.392768  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.431413  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.445393  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.445428  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.930605  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.935880  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.935918  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.430551  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.435690  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:42.435720  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.931341  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.936295  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:17:42.944125  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:17:42.944163  128156 api_server.go:131] duration metric: took 4.514753942s to wait for apiserver health ...
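The block above shows the apiserver readiness wait: /healthz is polled repeatedly, and 403 responses (anonymous access not yet authorized) as well as 500 responses (post-start hooks still failing) are treated as "not ready yet" until a plain 200/ok comes back. A minimal Go sketch of that polling pattern, with illustrative timeout and interval values rather than minikube's actual ones, looks roughly like this:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline expires. Non-200 responses are treated as "not ready yet",
// matching the behaviour visible in the log above.
func waitForHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bootstrap,
			// so certificate verification is skipped in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above; timeout and interval are illustrative.
	if err := waitForHealthz("https://192.168.72.32:8443/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
```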
	I1212 23:17:42.944173  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:42.944179  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:42.945951  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:42.947258  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:42.957745  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:42.978269  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:42.990231  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:42.990267  128156 system_pods.go:61] "coredns-76f75df574-2rdhr" [266c2440-a927-476c-b918-d0712834fc2f] Running
	I1212 23:17:42.990274  128156 system_pods.go:61] "etcd-no-preload-115023" [522ee237-12e0-4b83-9e20-05713cd87c7d] Running
	I1212 23:17:42.990281  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [9048886a-1b8b-407d-bd71-c5a850d88a5f] Running
	I1212 23:17:42.990287  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [4652e03f-2622-41d8-8791-bcc648d43432] Running
	I1212 23:17:42.990292  128156 system_pods.go:61] "kube-proxy-rqhmc" [b7514603-3389-4a38-b24a-e9c7948364bc] Running
	I1212 23:17:42.990299  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [7ce16391-9627-454b-b0de-27af47921997] Running
	I1212 23:17:42.990308  128156 system_pods.go:61] "metrics-server-57f55c9bc5-b42rv" [f27bd873-340b-4ae1-922a-ed8f52d558dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:42.990316  128156 system_pods.go:61] "storage-provisioner" [d9565f7f-dcf4-4e4d-88fd-e8a54bbf0e40] Running
	I1212 23:17:42.990327  128156 system_pods.go:74] duration metric: took 12.031472ms to wait for pod list to return data ...
	I1212 23:17:42.990347  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:42.994787  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:42.994817  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:42.994827  128156 node_conditions.go:105] duration metric: took 4.471497ms to run NodePressure ...
	I1212 23:17:42.994844  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.281299  128156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:43.286299  128156 retry.go:31] will retry after 184.15509ms: kubelet not initialised
	I1212 23:17:43.476354  128156 retry.go:31] will retry after 533.806598ms: kubelet not initialised
	I1212 23:17:44.036349  128156 retry.go:31] will retry after 483.473669ms: kubelet not initialised
	I1212 23:17:41.477798  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.477898  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.493963  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:41.977991  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.978077  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.994590  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.478242  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.478334  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.495374  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.978495  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.978597  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.992337  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.477604  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.477667  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.491061  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.977638  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.977754  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.991654  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.478308  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:44.478409  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:44.494965  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.953708  128282 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:44.953763  128282 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:44.953780  128282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:44.953874  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:45.003440  128282 cri.go:89] found id: ""
	I1212 23:17:45.003519  128282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:45.021471  128282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:45.036134  128282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:45.036203  128282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049188  128282 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049214  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.197549  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.958707  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.088583  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.587947  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:47.588918  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
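The pod_ready lines above poll specific kube-system pods until their Ready condition turns True, giving up after the stated wait budget. A rough client-go sketch of the same check, assuming a hypothetical kubeconfig path and using a pod name taken from the log, might look like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube manages its own under the profile dir.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-76f75df574-2rdhr", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```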
	I1212 23:17:45.768814  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:45.769238  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:45.769270  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:45.769171  129329 retry.go:31] will retry after 3.722952815s: waiting for machine to come up
	I1212 23:17:44.529285  128156 kubeadm.go:787] kubelet initialised
	I1212 23:17:44.529310  128156 kubeadm.go:788] duration metric: took 1.247981757s waiting for restarted kubelet to initialise ...
	I1212 23:17:44.529321  128156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:44.551751  128156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:46.588427  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:48.589582  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:46.161702  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.251040  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
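The sequence above reconfigures the control plane by running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than a full kubeadm init. A simplified local sketch of driving that same phase order (minikube actually runs these over SSH inside the guest, with the pinned binaries on PATH) could be:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order mirrors the log above; the config path is the one minikube
	// writes on the guest and is shown here purely for illustration.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v:\n%s\n", phase, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
```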
	I1212 23:17:46.344286  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:46.344385  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.359646  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.875339  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.375793  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.875532  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.375394  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.875412  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.903144  128282 api_server.go:72] duration metric: took 2.558861066s to wait for apiserver process to appear ...
	I1212 23:17:48.903170  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:48.903188  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.903660  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:48.903697  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.904122  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:49.404880  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:50.088813  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.089208  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:49.494927  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495446  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has current primary IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495474  127760 main.go:141] libmachine: (embed-certs-809120) Found IP for machine: 192.168.50.221
	I1212 23:17:49.495489  127760 main.go:141] libmachine: (embed-certs-809120) Reserving static IP address...
	I1212 23:17:49.495884  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.495933  127760 main.go:141] libmachine: (embed-certs-809120) DBG | skip adding static IP to network mk-embed-certs-809120 - found existing host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"}
	I1212 23:17:49.495954  127760 main.go:141] libmachine: (embed-certs-809120) Reserved static IP address: 192.168.50.221
	I1212 23:17:49.495971  127760 main.go:141] libmachine: (embed-certs-809120) Waiting for SSH to be available...
	I1212 23:17:49.495987  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Getting to WaitForSSH function...
	I1212 23:17:49.498007  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498362  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.498398  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498514  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH client type: external
	I1212 23:17:49.498545  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa (-rw-------)
	I1212 23:17:49.498583  127760 main.go:141] libmachine: (embed-certs-809120) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:49.498598  127760 main.go:141] libmachine: (embed-certs-809120) DBG | About to run SSH command:
	I1212 23:17:49.498615  127760 main.go:141] libmachine: (embed-certs-809120) DBG | exit 0
	I1212 23:17:49.635655  127760 main.go:141] libmachine: (embed-certs-809120) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:49.636039  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetConfigRaw
	I1212 23:17:49.636795  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.639601  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640032  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.640059  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640367  127760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:17:49.640604  127760 machine.go:88] provisioning docker machine ...
	I1212 23:17:49.640629  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:49.640901  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641044  127760 buildroot.go:166] provisioning hostname "embed-certs-809120"
	I1212 23:17:49.641066  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641184  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.643599  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644050  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.644082  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644210  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.644439  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644612  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644791  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.644961  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.645333  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.645350  127760 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-809120 && echo "embed-certs-809120" | sudo tee /etc/hostname
	I1212 23:17:49.779263  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-809120
	
	I1212 23:17:49.779298  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.782329  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782739  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.782772  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782891  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.783133  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783306  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783466  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.783641  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.784029  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.784055  127760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-809120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-809120/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-809120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:49.914603  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:49.914641  127760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:49.914673  127760 buildroot.go:174] setting up certificates
	I1212 23:17:49.914686  127760 provision.go:83] configureAuth start
	I1212 23:17:49.914704  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.915021  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.918281  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918661  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.918715  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918849  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.921184  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921566  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.921603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921732  127760 provision.go:138] copyHostCerts
	I1212 23:17:49.921811  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:49.921824  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:49.921891  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:49.922013  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:49.922030  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:49.922061  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:49.922139  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:49.922149  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:49.922174  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:49.922255  127760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-809120 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube embed-certs-809120]
	I1212 23:17:50.309293  127760 provision.go:172] copyRemoteCerts
	I1212 23:17:50.309361  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:50.309389  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.312319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.312745  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312942  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.313157  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.313362  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.313554  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.401075  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:50.426930  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:17:50.452785  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:50.480062  127760 provision.go:86] duration metric: configureAuth took 565.356144ms
	I1212 23:17:50.480098  127760 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:50.480377  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:50.480523  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.483621  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484035  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.484091  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484244  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.484455  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484603  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484728  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.484903  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.485264  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.485282  127760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:50.842779  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:50.842815  127760 machine.go:91] provisioned docker machine in 1.202192917s
	I1212 23:17:50.842831  127760 start.go:300] post-start starting for "embed-certs-809120" (driver="kvm2")
	I1212 23:17:50.842846  127760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:50.842882  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:50.843282  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:50.843318  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.846267  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846670  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.846704  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846881  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.847102  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.847322  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.847496  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.934904  127760 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:50.939875  127760 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:50.939912  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:50.940000  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:50.940130  127760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:50.940242  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:50.950764  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:50.977204  127760 start.go:303] post-start completed in 134.34972ms
	I1212 23:17:50.977232  127760 fix.go:56] fixHost completed within 23.540815255s
	I1212 23:17:50.977256  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.980553  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981029  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.981065  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981350  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.981611  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981766  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981917  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.982111  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.982448  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.982467  127760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:51.096273  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423071.035304579
	
	I1212 23:17:51.096303  127760 fix.go:206] guest clock: 1702423071.035304579
	I1212 23:17:51.096311  127760 fix.go:219] Guest: 2023-12-12 23:17:51.035304579 +0000 UTC Remote: 2023-12-12 23:17:50.977236465 +0000 UTC m=+368.149225502 (delta=58.068114ms)
	I1212 23:17:51.096365  127760 fix.go:190] guest clock delta is within tolerance: 58.068114ms
	I1212 23:17:51.096375  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 23.659994787s
	I1212 23:17:51.096401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.096676  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:51.099275  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099683  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.099714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099864  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100586  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100671  127760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:51.100713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.100833  127760 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:51.100859  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.103808  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104103  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104214  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104268  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104379  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104415  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104405  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104615  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104620  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104817  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.104999  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.105058  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.105220  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.214734  127760 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:51.221556  127760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:51.379699  127760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:51.386319  127760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:51.386411  127760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:51.406594  127760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:51.406623  127760 start.go:475] detecting cgroup driver to use...
	I1212 23:17:51.406707  127760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:51.421646  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:51.439574  127760 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:51.439651  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:51.456389  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:51.469380  127760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:51.576093  127760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:51.711468  127760 docker.go:219] disabling docker service ...
	I1212 23:17:51.711548  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:51.726747  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:51.739661  127760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:51.852974  127760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:51.973603  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:51.986983  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:52.004739  127760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:52.004809  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.017255  127760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:52.017345  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.029275  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.040398  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
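The three sed edits above configure CRI-O for this run: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs, and conmon is moved into the pod cgroup. A quick way to confirm the result on the guest (illustrative; the drop-in path is the one used in the log, exact line order in the file may vary):

    # show the three settings written into the CRI-O drop-in
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"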
	I1212 23:17:52.051671  127760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:52.062036  127760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:52.070879  127760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:52.070958  127760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:52.087878  127760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
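The failed sysctl a few lines up is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the run loads the module and then enables IPv4 forwarding. A hand check of the same state (illustrative) looks like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # should print 1 after the echo above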
	I1212 23:17:52.099487  127760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:52.246621  127760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:52.445182  127760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:52.445259  127760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:52.450378  127760 start.go:543] Will wait 60s for crictl version
	I1212 23:17:52.450458  127760 ssh_runner.go:195] Run: which crictl
	I1212 23:17:52.454778  127760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:52.497569  127760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:52.497679  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.562042  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.622289  127760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:52.623892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:52.626997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627438  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:52.627474  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627731  127760 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:52.633387  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
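The one-liner above refreshes the host.minikube.internal record without editing /etc/hosts in place: it filters out any stale entry, appends the current gateway address, and copies the temp file back over /etc/hosts with sudo. A readable equivalent (a sketch, not the exact code minikube runs) is:

    # drop any old host.minikube.internal line, add the current mapping, install it
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
    echo $'192.168.50.1\thost.minikube.internal' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts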
	I1212 23:17:52.647682  127760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:52.647763  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:52.691061  127760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:52.691138  127760 ssh_runner.go:195] Run: which lz4
	I1212 23:17:52.695575  127760 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:17:52.701228  127760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:52.701265  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:53.042479  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.042516  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.042532  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.134475  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.134511  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.404943  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.413791  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.413829  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:53.904341  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.916515  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.916564  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:54.404229  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:54.414091  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:17:54.428577  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:17:54.428615  128282 api_server.go:131] duration metric: took 5.525437271s to wait for apiserver health ...
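For context on the probe loop above: the apiserver's /healthz endpoint is polled directly over HTTPS, first returning 403 for the unauthenticated probe, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, and finally 200 once bootstrap completes. The same check can be reproduced by hand (illustrative; host and port taken from this log):

    # -k skips verification of the cluster-internal serving certificate
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.39.180:8444/healthz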
	I1212 23:17:54.428628  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:54.428638  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:54.430838  128282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:50.589742  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.593395  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:54.432405  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:54.450147  128282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:54.496866  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:54.519276  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:54.519327  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:17:54.519339  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:17:54.519354  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:17:54.519405  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:17:54.519418  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:17:54.519434  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:17:54.519447  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:54.519484  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:17:54.519498  128282 system_pods.go:74] duration metric: took 22.603103ms to wait for pod list to return data ...
	I1212 23:17:54.519512  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:54.526046  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:54.526083  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:54.526098  128282 node_conditions.go:105] duration metric: took 6.575834ms to run NodePressure ...
	I1212 23:17:54.526127  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:54.979886  128282 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991132  128282 kubeadm.go:787] kubelet initialised
	I1212 23:17:54.991169  128282 kubeadm.go:788] duration metric: took 11.248765ms waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991185  128282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:54.999550  128282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.008465  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008494  128282 pod_ready.go:81] duration metric: took 8.904589ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.008508  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008516  128282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.020120  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020152  128282 pod_ready.go:81] duration metric: took 11.625987ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.020164  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020191  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.030018  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030056  128282 pod_ready.go:81] duration metric: took 9.856172ms waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.030074  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030083  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.039957  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.039997  128282 pod_ready.go:81] duration metric: took 9.902972ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.040015  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.040025  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.384922  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384964  128282 pod_ready.go:81] duration metric: took 344.925878ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.384979  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384988  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.791268  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791307  128282 pod_ready.go:81] duration metric: took 406.306307ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.791323  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791335  128282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:56.186386  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186484  128282 pod_ready.go:81] duration metric: took 395.136012ms waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:56.186514  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186553  128282 pod_ready.go:38] duration metric: took 1.195355612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:56.186577  128282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:56.201434  128282 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:56.201462  128282 kubeadm.go:640] restartCluster took 21.273148264s
	I1212 23:17:56.201473  128282 kubeadm.go:406] StartCluster complete in 21.325115034s
	I1212 23:17:56.201496  128282 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.201592  128282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:56.204683  128282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.205095  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:56.205222  128282 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:56.205300  128282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205321  128282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205330  128282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205346  128282 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205361  128282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850839"
	W1212 23:17:56.205363  128282 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:56.205324  128282 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205445  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205360  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 23:17:56.205501  128282 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:56.205595  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205832  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205855  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205918  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205939  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205978  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.206077  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.215695  128282 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850839" context rescaled to 1 replicas
	I1212 23:17:56.215745  128282 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:56.219003  128282 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:56.221363  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.223684  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I1212 23:17:56.223901  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1212 23:17:56.224018  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I1212 23:17:56.224530  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.224610  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225015  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225250  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.225260  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.225597  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.225990  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.226015  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.226308  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.226318  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.227368  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.227535  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.229799  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.229817  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.230427  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.232575  128282 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-850839"
	W1212 23:17:56.232593  128282 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:56.232623  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.233075  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233110  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.233880  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233930  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.245636  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1212 23:17:56.246119  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.246606  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.246623  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.246950  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.247098  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.248959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.251159  128282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:56.249918  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1212 23:17:56.251294  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1212 23:17:56.252768  128282 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.252783  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:56.252798  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.253647  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.253753  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.254219  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254233  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254340  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254347  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254690  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254749  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.255310  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.255335  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.256017  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256612  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.256639  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.257003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.257189  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.257402  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.258242  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.260097  128282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:54.115994  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:55.606824  127900 pod_ready.go:92] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.606858  127900 pod_ready.go:81] duration metric: took 34.03725266s waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.606872  127900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619163  127900 pod_ready.go:92] pod "etcd-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.619197  127900 pod_ready.go:81] duration metric: took 12.316097ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619212  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627282  127900 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.627313  127900 pod_ready.go:81] duration metric: took 8.08913ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627328  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634928  127900 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.634962  127900 pod_ready.go:81] duration metric: took 7.625067ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634978  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644531  127900 pod_ready.go:92] pod "kube-proxy-b6lz6" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.644558  127900 pod_ready.go:81] duration metric: took 9.571853ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644572  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985318  127900 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.985350  127900 pod_ready.go:81] duration metric: took 340.769789ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985366  127900 pod_ready.go:38] duration metric: took 34.420989087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:55.985382  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:55.985443  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:56.008913  127900 api_server.go:72] duration metric: took 42.305439195s to wait for apiserver process to appear ...
	I1212 23:17:56.009000  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:56.009030  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:56.017005  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:56.018170  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:56.018198  127900 api_server.go:131] duration metric: took 9.18267ms to wait for apiserver health ...
	I1212 23:17:56.018209  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:56.189360  127900 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:56.189394  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.189401  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.189408  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.189415  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.189421  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.189428  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.189437  127900 system_pods.go:61] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.189447  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.189462  127900 system_pods.go:74] duration metric: took 171.24435ms to wait for pod list to return data ...
	I1212 23:17:56.189477  127900 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:17:56.386180  127900 default_sa.go:45] found service account: "default"
	I1212 23:17:56.386211  127900 default_sa.go:55] duration metric: took 196.72345ms for default service account to be created ...
	I1212 23:17:56.386223  127900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:17:56.591313  127900 system_pods.go:86] 8 kube-system pods found
	I1212 23:17:56.591345  127900 system_pods.go:89] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.591354  127900 system_pods.go:89] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.591361  127900 system_pods.go:89] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.591369  127900 system_pods.go:89] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.591375  127900 system_pods.go:89] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.591382  127900 system_pods.go:89] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.591393  127900 system_pods.go:89] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.591401  127900 system_pods.go:89] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.591414  127900 system_pods.go:126] duration metric: took 205.183283ms to wait for k8s-apps to be running ...
	I1212 23:17:56.591429  127900 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:17:56.591482  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.611938  127900 system_svc.go:56] duration metric: took 20.493956ms WaitForService to wait for kubelet.
	I1212 23:17:56.611982  127900 kubeadm.go:581] duration metric: took 42.908516938s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:17:56.612014  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:56.785799  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:56.785841  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:56.785856  127900 node_conditions.go:105] duration metric: took 173.834506ms to run NodePressure ...
	I1212 23:17:56.785874  127900 start.go:228] waiting for startup goroutines ...
	I1212 23:17:56.785883  127900 start.go:233] waiting for cluster config update ...
	I1212 23:17:56.785898  127900 start.go:242] writing updated cluster config ...
	I1212 23:17:56.786402  127900 ssh_runner.go:195] Run: rm -f paused
	I1212 23:17:56.860461  127900 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 23:17:56.862646  127900 out.go:177] 
	W1212 23:17:56.864213  127900 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 23:17:56.865656  127900 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 23:17:56.867482  127900 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-549640" cluster and "default" namespace by default
	I1212 23:17:54.719978  127760 crio.go:444] Took 2.024442 seconds to copy over tarball
	I1212 23:17:54.720063  127760 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:56.261553  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:56.261577  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:56.261599  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.269093  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269478  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.269501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269778  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.269969  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.270192  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.270348  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.273173  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1212 23:17:56.273551  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.274146  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.274170  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.274479  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.274657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.276224  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.276536  128282 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.276553  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:56.276572  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.279571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.279991  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.280030  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.280183  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.280395  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.280562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.280708  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.399444  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.447026  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:56.447058  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:56.453920  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.474280  128282 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:56.474316  128282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:17:56.509564  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:56.509598  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:56.575180  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:56.575217  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:56.641478  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:58.298873  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.89938362s)
	I1212 23:17:58.298942  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.298948  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.844991558s)
	I1212 23:17:58.298957  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.298986  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299063  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299326  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299356  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299367  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299387  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299439  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299448  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299463  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299479  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299489  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299673  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299690  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299850  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299879  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299899  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.308876  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.308905  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.309195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.309232  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.309241  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.418788  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.777244462s)
	I1212 23:17:58.418849  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.418866  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.419251  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.419285  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.419297  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.419308  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.420803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.420837  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.420857  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.420876  128282 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:58.591048  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:58.635345  128282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:17:54.595102  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:57.089235  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:58.815643  128282 addons.go:502] enable addons completed in 2.610454188s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:17:58.247448  127760 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.527350021s)
	I1212 23:17:58.247482  127760 crio.go:451] Took 3.527472 seconds to extract the tarball
	I1212 23:17:58.247500  127760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:58.292239  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:58.347669  127760 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:58.347700  127760 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:58.347774  127760 ssh_runner.go:195] Run: crio config
	I1212 23:17:58.410577  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:17:58.410604  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:58.410627  127760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:58.410658  127760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-809120 NodeName:embed-certs-809120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:58.410874  127760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-809120"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:58.410973  127760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-809120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
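
The kubelet unit drop-in above is rendered from the cluster config before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later. A minimal Go text/template sketch of that rendering step; the template text, field names, and structure here are illustrative rather than minikube's actual template, and the values are taken from the log:

package main

import (
	"os"
	"text/template"
)

// dropIn mimics the [Service] override shown in the log: the node name,
// node IP, and Kubernetes version are the only per-cluster inputs.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values below come from the log; any other cluster would substitute its own.
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.28.4",
		"NodeName": "embed-certs-809120",
		"NodeIP":   "192.168.50.221",
	})
}
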
	I1212 23:17:58.411040  127760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:58.422571  127760 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:58.422655  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:58.432833  127760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:17:58.449996  127760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:58.468807  127760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 23:17:58.487568  127760 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:58.492547  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
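
The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP, dropping any stale mapping first. A small Go sketch of the same operation, assuming root access to /etc/hosts; the function name and structure are illustrative, not minikube's code:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line that maps host and appends a
// fresh "ip<TAB>host" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.50.221", "control-plane.minikube.internal")
}
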
	I1212 23:17:58.505497  127760 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120 for IP: 192.168.50.221
	I1212 23:17:58.505548  127760 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:58.505759  127760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:58.505820  127760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:58.505891  127760 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/client.key
	I1212 23:17:58.585996  127760 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key.edab0817
	I1212 23:17:58.586114  127760 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key
	I1212 23:17:58.586288  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:58.586319  127760 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:58.586330  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:58.586356  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:58.586381  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:58.586418  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:58.586483  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:58.587254  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:58.615215  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:58.644237  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:58.670345  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:58.694986  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:58.719944  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:58.744701  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:58.768614  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:58.792922  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:58.815723  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:58.840192  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:58.864277  127760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:58.883069  127760 ssh_runner.go:195] Run: openssl version
	I1212 23:17:58.889642  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:58.901893  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906910  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906964  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.912769  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:58.924171  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:58.937368  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942604  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942681  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.948759  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:58.959757  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:58.971091  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976035  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976105  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.982246  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
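
The test -L / ln -fs pairs above install each certificate under its OpenSSL subject-hash name (for example 838252.pem as 3ec20f2e.0), which is how OpenSSL's trust-store lookup finds CA files. A Go sketch of that linking step that shells out to openssl exactly as the log does; the paths are examples taken from the run:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates a <subject-hash>.0 symlink in certsDir pointing
// at certPath, so OpenSSL can resolve the certificate by hash.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}
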
	I1212 23:17:58.994786  127760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:58.999625  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:59.006233  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:59.012668  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:59.018959  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:59.025039  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:59.031628  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
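
Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours. The same check written in pure Go with crypto/x509, as a sketch; the path is one of the files checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
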
	I1212 23:17:59.037633  127760 kubeadm.go:404] StartCluster: {Name:embed-certs-809120 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:59.037779  127760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:59.037837  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:59.078977  127760 cri.go:89] found id: ""
	I1212 23:17:59.079065  127760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:59.090869  127760 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:59.090893  127760 kubeadm.go:636] restartCluster start
	I1212 23:17:59.090957  127760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:59.101950  127760 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.103088  127760 kubeconfig.go:92] found "embed-certs-809120" server: "https://192.168.50.221:8443"
	I1212 23:17:59.105562  127760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:59.115942  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.116006  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.128428  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.128452  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.128508  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.141075  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.641778  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.641854  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.654519  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.142171  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.142275  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.157160  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.641601  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.641719  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.654666  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.141184  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.141289  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.154899  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.641381  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.641501  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.654663  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.141186  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.141311  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.154140  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.642051  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.642157  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.655013  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.586733  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.588383  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:03.588956  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.092631  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:03.591508  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:04.090728  128282 node_ready.go:49] node "default-k8s-diff-port-850839" has status "Ready":"True"
	I1212 23:18:04.090757  128282 node_ready.go:38] duration metric: took 7.616412902s waiting for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:18:04.090775  128282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:04.099347  128282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107155  128282 pod_ready.go:92] pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.107180  128282 pod_ready.go:81] duration metric: took 7.807715ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107192  128282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113524  128282 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.113547  128282 pod_ready.go:81] duration metric: took 6.348789ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113557  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
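
The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A client-go sketch of such a wait loop, assuming a kubeconfig at /var/lib/minikube/kubeconfig; this is an illustration of the pattern, not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every two seconds until its PodReady condition
// is True or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-850839", 6*time.Minute))
}
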
	I1212 23:18:03.141560  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.141654  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.156399  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:03.642066  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.642159  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.657347  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.141755  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.141837  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.158471  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.641645  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.641754  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.655061  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.141603  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.141699  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.154832  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.641246  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.641321  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.658753  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.141224  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.141299  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.156055  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.641506  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.641593  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.654083  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.141490  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.141570  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.154699  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.641257  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.641336  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.653935  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.590423  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.088212  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:06.134727  128282 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:07.136828  128282 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.136854  128282 pod_ready.go:81] duration metric: took 3.023290043s waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.136866  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151525  128282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.151554  128282 pod_ready.go:81] duration metric: took 14.680003ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151570  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293823  128282 pod_ready.go:92] pod "kube-proxy-wjrjj" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.293853  128282 pod_ready.go:81] duration metric: took 142.276185ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293864  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690262  128282 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.690291  128282 pod_ready.go:81] duration metric: took 396.420266ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690311  128282 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:10.001790  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.141984  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.142065  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.154365  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:08.641957  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.642070  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.654449  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:09.117052  127760 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:18:09.117093  127760 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:18:09.117131  127760 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:18:09.117195  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:18:09.165861  127760 cri.go:89] found id: ""
	I1212 23:18:09.165944  127760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:18:09.183729  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:18:09.194407  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:18:09.194487  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204575  127760 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204609  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:09.333758  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.380332  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04653446s)
	I1212 23:18:10.380364  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.603185  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.692919  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.776099  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:18:10.776189  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.795881  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.310083  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.809948  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.309977  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.810420  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.089789  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.589345  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:14.002715  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:13.310509  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:13.336361  127760 api_server.go:72] duration metric: took 2.560264825s to wait for apiserver process to appear ...
	I1212 23:18:13.336391  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:18:13.336411  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.319120  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.319159  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.319177  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.400337  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.400373  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.900625  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.906178  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:17.906233  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.401353  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.407217  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:18.407262  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.901435  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.913756  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:18:18.922517  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:18:18.922545  127760 api_server.go:131] duration metric: took 5.586147801s to wait for apiserver health ...
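
The healthz sequence above retries https://192.168.50.221:8443/healthz until the 403 and 500 responses give way to a 200. A self-contained Go sketch of that polling loop; TLS verification is skipped here purely to keep the example short, whereas the real probe authenticates with client certificates, which is why the anonymous requests earlier were rejected with 403:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.221:8443/healthz", 5*time.Minute))
}
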
	I1212 23:18:18.922556  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:18:18.922563  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:18:18.924845  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:18:15.088538  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:17.587744  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:16.503957  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.002214  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:18.926570  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:18:18.976384  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:18:19.009915  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:18:19.035935  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:18:19.035986  127760 system_pods.go:61] "coredns-5dd5756b68-bz6cz" [4f53d6a6-c877-4f76-8aca-06ee891d9652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:18:19.035996  127760 system_pods.go:61] "etcd-embed-certs-809120" [260387de-7507-4962-b2fd-90cd6b39cae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:18:19.036005  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [94ded414-9813-4d0e-8de4-8ad5f6c16a33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:18:19.036017  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [c6574dde-8281-4dd2-bacd-c0412f1f592c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:18:19.036028  127760 system_pods.go:61] "kube-proxy-h7zgl" [87ca2a99-1da7-4a50-b4c7-f160cddf9ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:18:19.036042  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [fc6d3a5c-4056-47f8-9156-f5d370ba1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:18:19.036053  127760 system_pods.go:61] "metrics-server-57f55c9bc5-mxsd2" [d519663c-7921-4fc9-8d0f-ecf6d3cdbd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:18:19.036071  127760 system_pods.go:61] "storage-provisioner" [900e5cb9-7d27-4446-b15d-21f67fa3b629] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:18:19.036081  127760 system_pods.go:74] duration metric: took 26.13268ms to wait for pod list to return data ...
	I1212 23:18:19.036093  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:18:19.045885  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:18:19.045930  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:18:19.045945  127760 node_conditions.go:105] duration metric: took 9.842707ms to run NodePressure ...
	I1212 23:18:19.045969  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:19.587096  127760 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593698  127760 kubeadm.go:787] kubelet initialised
	I1212 23:18:19.593722  127760 kubeadm.go:788] duration metric: took 6.595854ms waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593730  127760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:19.602567  127760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:21.623798  127760 pod_ready.go:102] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.590788  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:22.089448  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:24.090497  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:21.501964  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.502814  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:26.000629  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.124864  127760 pod_ready.go:92] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:23.124888  127760 pod_ready.go:81] duration metric: took 3.52228673s waiting for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:23.124898  127760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:25.143967  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.146069  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.645645  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.645671  127760 pod_ready.go:81] duration metric: took 4.520766787s waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.645686  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652369  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.652392  127760 pod_ready.go:81] duration metric: took 6.700076ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652402  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587478  128156 pod_ready.go:92] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.587505  128156 pod_ready.go:81] duration metric: took 40.035726456s waiting for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587518  128156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.596994  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.597015  128156 pod_ready.go:81] duration metric: took 9.490538ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.597027  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601904  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.601930  128156 pod_ready.go:81] duration metric: took 4.894855ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601942  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608643  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.608662  128156 pod_ready.go:81] duration metric: took 6.712079ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608673  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614595  128156 pod_ready.go:92] pod "kube-proxy-rqhmc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.614624  128156 pod_ready.go:81] duration metric: took 5.945157ms waiting for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614632  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985244  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.985272  128156 pod_ready.go:81] duration metric: took 370.631498ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985282  128156 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.293707  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.293859  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:28.500792  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:31.002513  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.676207  127760 pod_ready.go:102] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:32.172306  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.172339  127760 pod_ready.go:81] duration metric: took 4.519929269s waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.172355  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178133  127760 pod_ready.go:92] pod "kube-proxy-h7zgl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.178154  127760 pod_ready.go:81] duration metric: took 5.793304ms waiting for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178163  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184283  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.184305  127760 pod_ready.go:81] duration metric: took 6.134863ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184319  127760 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:31.792415  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.793837  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.499687  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:35.500853  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:34.448290  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.948646  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.296844  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.793406  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:40.501951  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.949791  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.448832  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.294594  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.295134  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.000673  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.000747  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.452098  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.947475  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.793152  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.793282  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.003229  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.499682  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.949034  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:50.449118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.455176  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.793896  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.293413  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.293611  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:51.502870  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.000866  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.002047  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.948058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.950946  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.791908  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.792808  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.500328  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.000549  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:59.449089  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.948622  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:00.793090  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.294337  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.002131  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.500315  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.948920  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.949566  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.792376  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.793999  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:08.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.500002  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.950271  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.450074  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.292457  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.294375  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.503977  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:15.000631  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.948486  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.951220  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.448916  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.792888  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:16.793429  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.293010  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.000916  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.499770  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.449088  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.949856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.293433  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.792996  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.506787  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.507411  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:26.001279  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.950269  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.952818  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.793527  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.294892  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.499823  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.500142  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.448303  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.449512  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.793364  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.293202  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.001883  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.500561  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:32.948419  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:34.948716  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:36.949202  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.293744  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:37.294070  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:38.001116  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:40.001502  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.449215  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:41.948577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.793176  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.292783  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.501401  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:45.003364  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:43.950039  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.449043  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:44.792361  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.793184  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.294980  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:47.500147  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.501096  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:48.449912  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:50.950549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:51.794547  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.298465  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.000382  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.005736  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.950635  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:55.449330  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:57.449700  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.792615  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.499865  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:58.499980  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:00.500389  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.950151  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:02.447970  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:01.793306  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.793698  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.001300  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.499370  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:04.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:06.450549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.793804  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.793899  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.500520  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.000481  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:08.950058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:11.449345  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.293157  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.293642  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.500064  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.500937  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:13.949163  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:16.448489  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.793066  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.293467  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.293785  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.003921  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.501044  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:18.953218  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.449082  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.792447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.794479  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.999979  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:24.001269  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.001308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.948517  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:25.949879  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.292488  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.293405  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.499717  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.500472  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.448633  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.455346  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.293436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.296063  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:33.004484  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:35.500190  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.949307  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.949549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.447994  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.792727  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.292297  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.293185  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.501094  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:40.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.448914  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.449574  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.296498  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.794079  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:42.000667  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:44.500084  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.949370  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.448365  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.293571  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.795374  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.501287  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:49.000247  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.002102  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.449326  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:50.950049  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.295712  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.796436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.500278  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.500483  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:52.950509  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.448194  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:57.448444  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:56.293432  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.791909  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.000148  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.000718  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:59.448627  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:01.449178  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.793652  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.798916  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.501103  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:04.504053  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:03.948376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.949118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.293868  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.796468  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.000140  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:09.500040  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.949917  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.449692  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.296954  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.793159  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:11.500724  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:13.501811  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:16.000506  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.948932  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:14.951174  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.448985  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:15.294394  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.792822  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:18.501242  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.000679  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:19.449857  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.949137  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:20.293991  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:22.793476  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.501237  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.001069  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.950208  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.449036  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:25.294562  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:27.792099  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.500763  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.000635  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.947918  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:30.949180  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:29.793559  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.793709  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:34.292407  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:33.001948  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.002761  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:32.949352  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.448233  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.449470  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:36.292723  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:38.792944  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.501308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.001944  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:39.948613  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:41.953252  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.793938  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.796054  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.499956  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.504598  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.453963  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.952856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:45.292988  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:47.792829  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.999714  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.000749  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.000798  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.448592  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.461405  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.793084  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:52.293550  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.001475  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:55.499894  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.952376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.451000  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:54.793373  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.796557  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:59.293830  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:57.501136  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.000501  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:58.949246  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.949331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:01.792604  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.793283  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:02.501611  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.001210  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.449006  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.449356  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:06.291970  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:08.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.502381  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.690392  128282 pod_ready.go:81] duration metric: took 4m0.000056495s waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:07.690437  128282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:07.690447  128282 pod_ready.go:38] duration metric: took 4m3.599656754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:07.690468  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:22:07.690503  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:07.690560  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:07.752216  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:07.752249  128282 cri.go:89] found id: ""
	I1212 23:22:07.752258  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:07.752309  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.757000  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:07.757068  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:07.801367  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:07.801398  128282 cri.go:89] found id: ""
	I1212 23:22:07.801409  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:07.801470  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.806744  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:07.806804  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:07.850495  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:07.850530  128282 cri.go:89] found id: ""
	I1212 23:22:07.850538  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:07.850588  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.855144  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:07.855226  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:07.900092  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:07.900121  128282 cri.go:89] found id: ""
	I1212 23:22:07.900131  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:07.900199  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.904280  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:07.904357  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:07.945991  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:07.946019  128282 cri.go:89] found id: ""
	I1212 23:22:07.946034  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:07.946101  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.951095  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:07.951168  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:07.992586  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:07.992611  128282 cri.go:89] found id: ""
	I1212 23:22:07.992619  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:07.992667  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.996887  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:07.996945  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:08.038769  128282 cri.go:89] found id: ""
	I1212 23:22:08.038810  128282 logs.go:284] 0 containers: []
	W1212 23:22:08.038820  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:08.038829  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:08.038892  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:08.081167  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.081202  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.081209  128282 cri.go:89] found id: ""
	I1212 23:22:08.081225  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:08.081282  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.085740  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.089816  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:08.089836  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:08.137243  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:08.137274  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:08.180654  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:08.180686  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:08.240646  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:08.240684  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:08.289713  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:08.289753  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:08.440863  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:08.440902  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:08.505477  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:08.505516  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.561373  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:08.561411  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:08.626446  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:08.626482  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:08.681726  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:08.681769  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:08.703440  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:08.703468  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.739960  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:08.739998  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:09.213821  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:09.213867  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:07.949577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:09.950086  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.449579  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:10.793412  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.794447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:11.771447  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:11.787326  128282 api_server.go:72] duration metric: took 4m15.571529815s to wait for apiserver process to appear ...
	I1212 23:22:11.787355  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:22:11.787395  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:11.787459  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:11.841146  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:11.841178  128282 cri.go:89] found id: ""
	I1212 23:22:11.841199  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:11.841263  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.845844  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:11.845917  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:11.895757  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:11.895780  128282 cri.go:89] found id: ""
	I1212 23:22:11.895789  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:11.895846  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.900575  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:11.900641  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:11.941848  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:11.941872  128282 cri.go:89] found id: ""
	I1212 23:22:11.941882  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:11.941962  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.948119  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:11.948192  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:11.997102  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:11.997126  128282 cri.go:89] found id: ""
	I1212 23:22:11.997135  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:11.997189  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.002683  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:12.002750  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:12.042120  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:12.042144  128282 cri.go:89] found id: ""
	I1212 23:22:12.042159  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:12.042225  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.047068  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:12.047144  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:12.092055  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:12.092078  128282 cri.go:89] found id: ""
	I1212 23:22:12.092087  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:12.092137  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.097642  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:12.097713  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:12.137481  128282 cri.go:89] found id: ""
	I1212 23:22:12.137521  128282 logs.go:284] 0 containers: []
	W1212 23:22:12.137532  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:12.137542  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:12.137607  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:12.183712  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:12.183735  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.183740  128282 cri.go:89] found id: ""
	I1212 23:22:12.183747  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:12.183813  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.188656  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.193613  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:12.193639  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:12.206911  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:12.206941  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:12.258294  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:12.258335  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.300901  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:12.300934  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:12.765702  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:12.765746  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:12.909101  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:12.909138  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:12.967049  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:12.967083  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:13.010895  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:13.010930  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:13.062291  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:13.062324  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:13.107276  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:13.107320  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:13.166395  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:13.166448  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:13.212812  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:13.212853  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:13.260977  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:13.261022  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
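The diagnostics pass above can be reproduced by hand on the node. A minimal sketch, assuming CRI-O as the runtime and crictl on the PATH (the container ID is a placeholder for one of the IDs reported by `crictl ps -a`):

	# List every container the CRI runtime knows about, running or exited
	sudo crictl ps -a
	# Tail the last 400 lines of one container's logs, as the log-gathering step does
	sudo crictl logs --tail 400 <container-id>
	# Collect the matching unit logs for the kubelet and CRI-O
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400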
	I1212 23:22:15.816287  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:22:15.821554  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:22:15.822925  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:22:15.822945  128282 api_server.go:131] duration metric: took 4.035583432s to wait for apiserver health ...
	I1212 23:22:15.822954  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:22:15.822976  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:15.823024  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:15.870940  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:15.870981  128282 cri.go:89] found id: ""
	I1212 23:22:15.870993  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:15.871062  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.876167  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:15.876244  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:15.916642  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:15.916671  128282 cri.go:89] found id: ""
	I1212 23:22:15.916682  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:15.916747  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.921173  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:15.921238  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:15.963421  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:15.963449  128282 cri.go:89] found id: ""
	I1212 23:22:15.963461  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:15.963521  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.967747  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:15.967821  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:14.949925  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.949999  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:15.294181  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:17.793324  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.011046  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.011071  128282 cri.go:89] found id: ""
	I1212 23:22:16.011079  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:16.011128  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.015592  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:16.015659  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:16.058065  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:16.058092  128282 cri.go:89] found id: ""
	I1212 23:22:16.058103  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:16.058157  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.062334  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:16.062398  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:16.105032  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:16.105062  128282 cri.go:89] found id: ""
	I1212 23:22:16.105074  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:16.105140  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.109674  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:16.109728  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:16.151188  128282 cri.go:89] found id: ""
	I1212 23:22:16.151221  128282 logs.go:284] 0 containers: []
	W1212 23:22:16.151230  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:16.151246  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:16.151314  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:16.196149  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:16.196191  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.196199  128282 cri.go:89] found id: ""
	I1212 23:22:16.196209  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:16.196272  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.201690  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.205939  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:16.205970  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:16.358186  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:16.358236  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:16.404737  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:16.404780  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.449040  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:16.449069  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.491141  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:16.491173  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:16.860522  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:16.860578  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:16.877982  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:16.878030  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:16.923301  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:16.923338  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:16.965351  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:16.965382  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:17.024559  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:17.024603  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:17.079193  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:17.079229  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:17.123956  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:17.124003  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:17.202000  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:17.202043  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:19.755866  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:22:19.755901  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.755907  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.755914  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.755922  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.755929  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.755936  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.755946  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.755954  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.755963  128282 system_pods.go:74] duration metric: took 3.933003633s to wait for pod list to return data ...
	I1212 23:22:19.755977  128282 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:22:19.758618  128282 default_sa.go:45] found service account: "default"
	I1212 23:22:19.758639  128282 default_sa.go:55] duration metric: took 2.655294ms for default service account to be created ...
	I1212 23:22:19.758647  128282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:22:19.764376  128282 system_pods.go:86] 8 kube-system pods found
	I1212 23:22:19.764398  128282 system_pods.go:89] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.764404  128282 system_pods.go:89] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.764409  128282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.764414  128282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.764418  128282 system_pods.go:89] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.764432  128282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.764444  128282 system_pods.go:89] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.764454  128282 system_pods.go:89] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.764464  128282 system_pods.go:126] duration metric: took 5.811076ms to wait for k8s-apps to be running ...
	I1212 23:22:19.764475  128282 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:22:19.764531  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:19.781048  128282 system_svc.go:56] duration metric: took 16.561836ms WaitForService to wait for kubelet.
	I1212 23:22:19.781100  128282 kubeadm.go:581] duration metric: took 4m23.565309829s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:22:19.781129  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:22:19.784205  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:22:19.784229  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:22:19.784240  128282 node_conditions.go:105] duration metric: took 3.105926ms to run NodePressure ...
	I1212 23:22:19.784253  128282 start.go:228] waiting for startup goroutines ...
	I1212 23:22:19.784259  128282 start.go:233] waiting for cluster config update ...
	I1212 23:22:19.784269  128282 start.go:242] writing updated cluster config ...
	I1212 23:22:19.784545  128282 ssh_runner.go:195] Run: rm -f paused
	I1212 23:22:19.840938  128282 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:22:19.842885  128282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850839" cluster and "default" namespace by default
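With the profile reported ready, the resulting kubeconfig context can be exercised straight from the host. A minimal sketch (the context name is taken from the "Done!" line above; the rest is standard kubectl usage):

	kubectl config current-context        # expected: default-k8s-diff-port-850839
	kubectl get nodes -o wide             # the single control-plane node should be Ready
	kubectl -n kube-system get pods       # should match the system pod list logged above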
	I1212 23:22:19.449331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:21.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:20.294156  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:22.792746  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:23.949834  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:26.452555  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.793601  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.985518  128156 pod_ready.go:81] duration metric: took 4m0.000203674s waiting for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:24.985551  128156 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:24.985571  128156 pod_ready.go:38] duration metric: took 4m40.456239368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:24.985600  128156 kubeadm.go:640] restartCluster took 5m2.616770336s
	W1212 23:22:24.985660  128156 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:24.985690  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:28.949293  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:31.449689  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:32.184476  127760 pod_ready.go:81] duration metric: took 4m0.000136331s waiting for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:32.184516  127760 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:32.184559  127760 pod_ready.go:38] duration metric: took 4m12.59080567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:32.184598  127760 kubeadm.go:640] restartCluster took 4m33.093698567s
	W1212 23:22:32.184674  127760 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:32.184715  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:39.117782  128156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.132057077s)
	I1212 23:22:39.117868  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:39.132912  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:39.143453  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:39.153628  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
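The missing kubeconfig files are expected at this point: the preceding `kubeadm reset` wiped /etc/kubernetes, so the stale-config cleanup is skipped and minikube falls through to a fresh `kubeadm init`. A sketch of the same check (paths exactly as in the log; exit status 2 is what triggers the init below):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	            /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf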
	I1212 23:22:39.153684  128156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:39.374201  128156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:46.310264  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.12551082s)
	I1212 23:22:46.310350  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:46.327577  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:46.339177  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:46.350355  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:46.350407  127760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:46.414859  127760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:22:46.414971  127760 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:46.599881  127760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:46.600039  127760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:46.600208  127760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:46.867542  127760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:46.869398  127760 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:46.869528  127760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:46.869659  127760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:46.869770  127760 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:46.869933  127760 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:46.870496  127760 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:46.871021  127760 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:46.871802  127760 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:46.873187  127760 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:46.874737  127760 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:46.876316  127760 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:46.877713  127760 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:46.877769  127760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:47.211156  127760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:47.370652  127760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:47.491927  127760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:47.746007  127760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:47.746996  127760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:47.749868  127760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:47.751553  127760 out.go:204]   - Booting up control plane ...
	I1212 23:22:47.751724  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:47.751814  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:47.752662  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:47.770296  127760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:47.770438  127760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:47.770546  127760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.362262  128156 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:22:51.362341  128156 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:51.362461  128156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:51.362593  128156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:51.362706  128156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:51.362781  128156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:51.364439  128156 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:51.364561  128156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:51.364660  128156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:51.364758  128156 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:51.364840  128156 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:51.364971  128156 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:51.365060  128156 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:51.365137  128156 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:51.365215  128156 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:51.365320  128156 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:51.365425  128156 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:51.365479  128156 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:51.365553  128156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:51.365626  128156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:51.365706  128156 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:22:51.365778  128156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:51.365859  128156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:51.365936  128156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:51.366046  128156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:51.366131  128156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:51.368190  128156 out.go:204]   - Booting up control plane ...
	I1212 23:22:51.368316  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:51.368421  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:51.368517  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:51.368649  128156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:51.368763  128156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:51.368813  128156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.369013  128156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.369107  128156 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503652 seconds
	I1212 23:22:51.369231  128156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:51.369390  128156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:51.369465  128156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:51.369709  128156 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-115023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:51.369780  128156 kubeadm.go:322] [bootstrap-token] Using token: agyzoj.wkr94b17dt19k7yx
	I1212 23:22:51.371110  128156 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:51.371306  128156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:51.371421  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:51.371643  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:51.371825  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:51.371975  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:51.372085  128156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:51.372226  128156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:51.372285  128156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:51.372344  128156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:51.372353  128156 kubeadm.go:322] 
	I1212 23:22:51.372425  128156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:51.372437  128156 kubeadm.go:322] 
	I1212 23:22:51.372529  128156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:51.372540  128156 kubeadm.go:322] 
	I1212 23:22:51.372571  128156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:51.372645  128156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:51.372711  128156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:51.372720  128156 kubeadm.go:322] 
	I1212 23:22:51.372793  128156 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:51.372804  128156 kubeadm.go:322] 
	I1212 23:22:51.372861  128156 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:51.372871  128156 kubeadm.go:322] 
	I1212 23:22:51.372933  128156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:51.373050  128156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:51.373137  128156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:51.373149  128156 kubeadm.go:322] 
	I1212 23:22:51.373248  128156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:51.373345  128156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:51.373356  128156 kubeadm.go:322] 
	I1212 23:22:51.373456  128156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373583  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:51.373613  128156 kubeadm.go:322] 	--control-plane 
	I1212 23:22:51.373623  128156 kubeadm.go:322] 
	I1212 23:22:51.373724  128156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:51.373739  128156 kubeadm.go:322] 
	I1212 23:22:51.373842  128156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373985  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:51.374006  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:22:51.374015  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:51.375563  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:47.945457  127760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.376861  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:51.414215  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
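The 457-byte conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in this log, but it can be inspected on the node once the cluster is up. A minimal sketch, assuming SSH access to the machine:

	# Show the bridge CNI config that was just written into place
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# Pod sandboxes picking up addresses from the cluster's pod CIDR indicates the
	# bridge plugin is in use
	sudo crictl pods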
	I1212 23:22:51.484549  128156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:51.484635  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.484696  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=no-preload-115023 minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.564599  128156 ops.go:34] apiserver oom_adj: -16
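The `oom_adj: -16` value comes from reading the apiserver's OOM score adjustment directly out of /proc; a negative value biases the kernel OOM killer away from the process. The same read, assuming a single kube-apiserver process on the node:

	cat /proc/$(pgrep kube-apiserver)/oom_adj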
	I1212 23:22:51.924093  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.026923  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.628483  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.128275  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.628006  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:54.127897  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.450625  127760 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504757 seconds
	I1212 23:22:56.450779  127760 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:56.468441  127760 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:57.003074  127760 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:57.003292  127760 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-809120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:57.518097  127760 kubeadm.go:322] [bootstrap-token] Using token: ichlu8.wzw1wbhrbc06xbtw
	I1212 23:22:57.519536  127760 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:57.519639  127760 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:57.528652  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:57.538325  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:57.542226  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:57.551395  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:57.556988  127760 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:57.573462  127760 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:57.833933  127760 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:57.949764  127760 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:57.949788  127760 kubeadm.go:322] 
	I1212 23:22:57.949888  127760 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:57.949913  127760 kubeadm.go:322] 
	I1212 23:22:57.950013  127760 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:57.950036  127760 kubeadm.go:322] 
	I1212 23:22:57.950079  127760 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:57.950155  127760 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:57.950228  127760 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:57.950240  127760 kubeadm.go:322] 
	I1212 23:22:57.950301  127760 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:57.950311  127760 kubeadm.go:322] 
	I1212 23:22:57.950375  127760 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:57.950385  127760 kubeadm.go:322] 
	I1212 23:22:57.950468  127760 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:57.950578  127760 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:57.950678  127760 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:57.950702  127760 kubeadm.go:322] 
	I1212 23:22:57.950818  127760 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:57.950916  127760 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:57.950926  127760 kubeadm.go:322] 
	I1212 23:22:57.951054  127760 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951199  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:57.951231  127760 kubeadm.go:322] 	--control-plane 
	I1212 23:22:57.951266  127760 kubeadm.go:322] 
	I1212 23:22:57.951386  127760 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:57.951396  127760 kubeadm.go:322] 
	I1212 23:22:57.951494  127760 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951619  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:57.952303  127760 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:57.952326  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:22:57.952337  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:57.954692  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:54.628965  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.127922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.627980  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.128047  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.628471  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.128456  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.628284  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.128528  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.628480  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.128296  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.955898  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:57.975567  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:58.044612  127760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:58.044741  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.044746  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=embed-certs-809120 minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.158788  127760 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:58.375305  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.487117  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.075465  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.575132  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.075781  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.575754  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.075376  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.575524  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.075163  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.574821  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.628475  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.128509  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.628837  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.128959  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.627976  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.128077  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.628493  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.128203  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.628549  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.127987  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.627922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.756882  128156 kubeadm.go:1088] duration metric: took 13.272316322s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:04.756928  128156 kubeadm.go:406] StartCluster complete in 5m42.440524658s
	I1212 23:23:04.756955  128156 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.757069  128156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:04.759734  128156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.760081  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:04.760220  128156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
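The addon map above is the programmatic form of the addon selection for this profile; the same state can be requested and inspected through the CLI. A minimal sketch (profile name taken from the surrounding log):

	# storage-provisioner and default-storageclass are on by default;
	# metrics-server is the extra addon requested here
	minikube addons enable metrics-server -p no-preload-115023
	minikube addons list -p no-preload-115023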
	I1212 23:23:04.760311  128156 addons.go:69] Setting storage-provisioner=true in profile "no-preload-115023"
	I1212 23:23:04.760325  128156 addons.go:69] Setting default-storageclass=true in profile "no-preload-115023"
	I1212 23:23:04.760358  128156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-115023"
	I1212 23:23:04.760385  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:23:04.760332  128156 addons.go:231] Setting addon storage-provisioner=true in "no-preload-115023"
	W1212 23:23:04.760426  128156 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:04.760497  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760337  128156 addons.go:69] Setting metrics-server=true in profile "no-preload-115023"
	I1212 23:23:04.760525  128156 addons.go:231] Setting addon metrics-server=true in "no-preload-115023"
	W1212 23:23:04.760538  128156 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:04.760577  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760759  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760787  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.760953  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760986  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760995  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.761010  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.777848  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1212 23:23:04.778063  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1212 23:23:04.778315  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778479  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778613  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I1212 23:23:04.778931  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778945  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778952  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.778957  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779020  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.779302  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779561  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.779726  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.779749  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779929  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.779961  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.780516  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.781173  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.781207  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.783399  128156 addons.go:231] Setting addon default-storageclass=true in "no-preload-115023"
	W1212 23:23:04.783422  128156 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:04.783452  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.783871  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.783906  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.797493  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:23:04.797741  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I1212 23:23:04.798102  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798132  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798613  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798630  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.798956  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798985  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.799262  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799438  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.799639  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.801934  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.802007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.803861  128156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:04.802341  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:23:04.806911  128156 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:04.805759  128156 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:04.806058  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.808825  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:04.808833  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:04.808848  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:04.808856  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.808863  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.809266  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.809281  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.809624  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.810352  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.810381  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.813139  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813629  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.813654  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813828  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813882  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814303  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.814333  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.814148  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814542  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814625  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.814797  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814855  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.814954  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.815127  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.823127  128156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-115023" context rescaled to 1 replicas
	I1212 23:23:04.823174  128156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:04.824991  128156 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:04.826596  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:04.827821  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1212 23:23:04.828256  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.828820  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.828845  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.829390  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.829741  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.834167  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.834521  128156 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:04.834539  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:04.834563  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.838055  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838555  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.838587  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838772  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.838964  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.839119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.839284  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.972964  128156 node_ready.go:35] waiting up to 6m0s for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.973014  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:04.998182  128156 node_ready.go:49] node "no-preload-115023" has status "Ready":"True"
	I1212 23:23:04.998214  128156 node_ready.go:38] duration metric: took 25.214785ms waiting for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.998226  128156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:05.012036  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:05.027954  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:05.027977  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:05.063451  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:05.076403  128156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:05.119924  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:05.119957  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:05.216413  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.216443  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:05.285434  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.817542  128156 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:06.316381  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252894593s)
	I1212 23:23:06.316378  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304291472s)
	I1212 23:23:06.316446  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316460  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316491  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316509  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316903  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.316959  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.316966  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.316986  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316916  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317010  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.317022  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316995  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317032  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317327  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.317387  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317408  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.318858  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.318881  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.366104  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.366135  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.366427  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.366481  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.366492  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618093  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332604197s)
	I1212 23:23:06.618161  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618183  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618643  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.618665  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618676  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618684  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618845  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620326  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620340  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.620363  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.620384  128156 addons.go:467] Verifying addon metrics-server=true in "no-preload-115023"
	I1212 23:23:06.622226  128156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:03.075069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.575772  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.074921  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.575481  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.075785  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.575855  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.075276  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.575017  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.075100  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.575342  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.623716  128156 addons.go:502] enable addons completed in 1.863496659s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:07.165490  128156 pod_ready.go:102] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:08.161341  128156 pod_ready.go:92] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.161380  128156 pod_ready.go:81] duration metric: took 3.084948492s waiting for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.161395  128156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169259  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.169294  128156 pod_ready.go:81] duration metric: took 7.890109ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169309  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176068  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.176097  128156 pod_ready.go:81] duration metric: took 6.779109ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176111  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183056  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.183085  128156 pod_ready.go:81] duration metric: took 6.964809ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183099  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066100  128156 pod_ready.go:92] pod "kube-proxy-qs95k" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.066123  128156 pod_ready.go:81] duration metric: took 883.017234ms waiting for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066132  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357841  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.357874  128156 pod_ready.go:81] duration metric: took 291.734639ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357884  128156 pod_ready.go:38] duration metric: took 4.359648281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:09.357904  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:09.357970  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:09.372791  128156 api_server.go:72] duration metric: took 4.549577037s to wait for apiserver process to appear ...
	I1212 23:23:09.372820  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:09.372841  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:23:09.378375  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:23:09.379855  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:23:09.379882  128156 api_server.go:131] duration metric: took 7.054126ms to wait for apiserver health ...
	I1212 23:23:09.379893  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:09.561188  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:09.561216  128156 system_pods.go:61] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.561221  128156 system_pods.go:61] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.561225  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.561229  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.561235  128156 system_pods.go:61] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.561239  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.561245  128156 system_pods.go:61] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.561249  128156 system_pods.go:61] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.561257  128156 system_pods.go:74] duration metric: took 181.358443ms to wait for pod list to return data ...
	I1212 23:23:09.561265  128156 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:09.756864  128156 default_sa.go:45] found service account: "default"
	I1212 23:23:09.756894  128156 default_sa.go:55] duration metric: took 195.622122ms for default service account to be created ...
	I1212 23:23:09.756905  128156 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:09.960670  128156 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:09.960700  128156 system_pods.go:89] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.960705  128156 system_pods.go:89] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.960710  128156 system_pods.go:89] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.960715  128156 system_pods.go:89] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.960719  128156 system_pods.go:89] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.960723  128156 system_pods.go:89] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.960729  128156 system_pods.go:89] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.960735  128156 system_pods.go:89] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.960744  128156 system_pods.go:126] duration metric: took 203.831934ms to wait for k8s-apps to be running ...
	I1212 23:23:09.960754  128156 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:09.960805  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:09.974511  128156 system_svc.go:56] duration metric: took 13.742619ms WaitForService to wait for kubelet.
	I1212 23:23:09.974543  128156 kubeadm.go:581] duration metric: took 5.15133848s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:09.974571  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:10.158679  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:10.158708  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:10.158717  128156 node_conditions.go:105] duration metric: took 184.140544ms to run NodePressure ...
	I1212 23:23:10.158730  128156 start.go:228] waiting for startup goroutines ...
	I1212 23:23:10.158736  128156 start.go:233] waiting for cluster config update ...
	I1212 23:23:10.158746  128156 start.go:242] writing updated cluster config ...
	I1212 23:23:10.158996  128156 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:10.222646  128156 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:23:10.224867  128156 out.go:177] * Done! kubectl is now configured to use "no-preload-115023" cluster and "default" namespace by default
	I1212 23:23:08.075026  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:08.574992  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.075693  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.575069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.075713  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.575464  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.075090  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.250257  127760 kubeadm.go:1088] duration metric: took 13.205579442s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:11.250290  127760 kubeadm.go:406] StartCluster complete in 5m12.212668558s
	I1212 23:23:11.250312  127760 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.250409  127760 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:11.253977  127760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.254241  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:11.254250  127760 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:11.254337  127760 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-809120"
	I1212 23:23:11.254351  127760 addons.go:69] Setting default-storageclass=true in profile "embed-certs-809120"
	I1212 23:23:11.254358  127760 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-809120"
	W1212 23:23:11.254366  127760 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:11.254369  127760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-809120"
	I1212 23:23:11.254422  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254431  127760 addons.go:69] Setting metrics-server=true in profile "embed-certs-809120"
	I1212 23:23:11.254457  127760 addons.go:231] Setting addon metrics-server=true in "embed-certs-809120"
	W1212 23:23:11.254466  127760 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:11.254466  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:23:11.254510  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254798  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254802  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254845  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.254902  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254933  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.255058  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.272689  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1212 23:23:11.272926  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1212 23:23:11.273095  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273297  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273444  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1212 23:23:11.273710  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273722  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.273784  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273935  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273947  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274917  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.274942  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.275403  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.275452  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.275615  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.275776  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.276164  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.276199  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.279953  127760 addons.go:231] Setting addon default-storageclass=true in "embed-certs-809120"
	W1212 23:23:11.279984  127760 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:11.280016  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.280439  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.280488  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.296262  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1212 23:23:11.296273  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1212 23:23:11.296731  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.296839  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.297284  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297296  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297304  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297315  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297662  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297722  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297820  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.297867  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1212 23:23:11.297876  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.298202  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.298805  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.298823  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.299106  127760 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-809120" context rescaled to 1 replicas
	I1212 23:23:11.299151  127760 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:11.300876  127760 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:11.299808  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.299838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.299990  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.302374  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:11.303907  127760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:11.305369  127760 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:11.302872  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.307972  127760 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.307992  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:11.308012  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306693  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:11.308064  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:11.308088  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306729  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.312550  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312826  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.312853  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313337  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.313477  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.313493  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313524  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.313558  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313610  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.313772  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313988  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.314165  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.314287  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.334457  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1212 23:23:11.335025  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.335687  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.335719  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.336130  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.336356  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.338062  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.338356  127760 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.338380  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:11.338407  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.341489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342079  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.342119  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342283  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.342499  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.342642  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.342823  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.562179  127760 node_ready.go:35] waiting up to 6m0s for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.562383  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:11.573888  127760 node_ready.go:49] node "embed-certs-809120" has status "Ready":"True"
	I1212 23:23:11.573909  127760 node_ready.go:38] duration metric: took 11.694074ms waiting for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.573919  127760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:11.591310  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:11.634553  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.672164  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.681199  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:11.681232  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:11.910291  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:11.910325  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:11.993110  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:11.993135  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:12.043047  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:13.550517  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.988091372s)
	I1212 23:23:13.550558  127760 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:13.642966  127760 pod_ready.go:102] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:14.387226  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752630931s)
	I1212 23:23:14.387298  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387315  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387321  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715126034s)
	I1212 23:23:14.387345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387359  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387641  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387663  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387675  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387690  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387776  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387801  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387811  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387819  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.388233  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388247  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388248  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.388285  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388291  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388345  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.426683  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.426713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.427017  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.427030  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.427038  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.477873  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.434777303s)
	I1212 23:23:14.477930  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.477944  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478303  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478321  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.478333  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.478357  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478607  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478622  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478632  127760 addons.go:467] Verifying addon metrics-server=true in "embed-certs-809120"
	I1212 23:23:14.480500  127760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:14.481900  127760 addons.go:502] enable addons completed in 3.227656537s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:15.629572  127760 pod_ready.go:92] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.629599  127760 pod_ready.go:81] duration metric: took 4.038262674s waiting for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.629608  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.638502  127760 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638532  127760 pod_ready.go:81] duration metric: took 8.918039ms waiting for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	E1212 23:23:15.638547  127760 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638556  127760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647047  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.647075  127760 pod_ready.go:81] duration metric: took 8.510672ms waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647089  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655068  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.655091  127760 pod_ready.go:81] duration metric: took 7.994932ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655100  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664338  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.664386  127760 pod_ready.go:81] duration metric: took 9.26869ms waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664401  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732454  127760 pod_ready.go:92] pod "kube-proxy-4nb6w" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:16.732480  127760 pod_ready.go:81] duration metric: took 1.068071012s waiting for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732489  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022376  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:17.022402  127760 pod_ready.go:81] duration metric: took 289.906446ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022423  127760 pod_ready.go:38] duration metric: took 5.448491831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:17.022445  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:17.022494  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:17.039594  127760 api_server.go:72] duration metric: took 5.740406855s to wait for apiserver process to appear ...
	I1212 23:23:17.039620  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:17.039637  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:23:17.044745  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:23:17.046494  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:23:17.046521  127760 api_server.go:131] duration metric: took 6.894306ms to wait for apiserver health ...
	I1212 23:23:17.046531  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:17.227869  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:17.227899  127760 system_pods.go:61] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.227904  127760 system_pods.go:61] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.227909  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.227913  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.227916  127760 system_pods.go:61] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.227920  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.227927  127760 system_pods.go:61] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.227933  127760 system_pods.go:61] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.227944  127760 system_pods.go:74] duration metric: took 181.405975ms to wait for pod list to return data ...
	I1212 23:23:17.227962  127760 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:17.423151  127760 default_sa.go:45] found service account: "default"
	I1212 23:23:17.423181  127760 default_sa.go:55] duration metric: took 195.20215ms for default service account to be created ...
	I1212 23:23:17.423190  127760 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:17.627077  127760 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:17.627104  127760 system_pods.go:89] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.627109  127760 system_pods.go:89] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.627114  127760 system_pods.go:89] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.627118  127760 system_pods.go:89] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.627124  127760 system_pods.go:89] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.627128  127760 system_pods.go:89] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.627135  127760 system_pods.go:89] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.627139  127760 system_pods.go:89] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.627147  127760 system_pods.go:126] duration metric: took 203.952951ms to wait for k8s-apps to be running ...
	I1212 23:23:17.627155  127760 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:17.627197  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:17.641949  127760 system_svc.go:56] duration metric: took 14.784378ms WaitForService to wait for kubelet.
	I1212 23:23:17.641979  127760 kubeadm.go:581] duration metric: took 6.342797652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:17.642005  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:17.823169  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:17.823201  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:17.823214  127760 node_conditions.go:105] duration metric: took 181.202017ms to run NodePressure ...
	I1212 23:23:17.823230  127760 start.go:228] waiting for startup goroutines ...
	I1212 23:23:17.823258  127760 start.go:233] waiting for cluster config update ...
	I1212 23:23:17.823276  127760 start.go:242] writing updated cluster config ...
	I1212 23:23:17.823609  127760 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:17.879192  127760 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:23:17.880946  127760 out.go:177] * Done! kubectl is now configured to use "embed-certs-809120" cluster and "default" namespace by default
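	(Editor's note, not part of the captured run: once minikube prints this line, the configured context can be sanity-checked with standard kubectl commands; the context name below is taken from the log above, purely as an illustrative sketch.)
	    kubectl config current-context                     # expected: embed-certs-809120
	    kubectl --context embed-certs-809120 get nodes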
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:17:15 UTC, ends at Tue 2023-12-12 23:31:21 UTC. --
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.530487923Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&PodSandboxMetadata{Name:busybox,Uid:2a7a232d-7be4-46ec-9442-550e77e1037a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423081429202492,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417692880Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-nrpzf,Uid:bfe81238-05e0-4f68-8a23-d212eb2a24f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:170242
3081409531210,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417664821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4af5de0e3df01e688d2acedbe821d9b9b23e58ab25c65cf3f84b8970dbca2f9,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-zwzrg,Uid:8b0d823e-df34-45eb-807c-17d8a9178bb8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423077526351271,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-zwzrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0d823e-df34-45eb-807c-17d8a9178bb8,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12
T23:17:53.417691229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&PodSandboxMetadata{Name:kube-proxy-wjrjj,Uid:fa659f1c-88de-406d-8183-bcac6f529efc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423073798908799,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa659f1c-88de-406d-8183-bcac6f529efc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417695971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0570ec42-4a53-4688-ac93-ee10fc58313d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423073790640746,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-12-12T23:17:53.417733616Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-850839,Uid:f7c2b6fd5a437e6949a9892207f94280,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066992830282,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c2b6fd5a437e6949a9892207f94280,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7c2b6fd5a437e6949a9892207f94280,kubernetes.io/config.seen: 2023-12-12T23:17:46.402528616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-defaul
t-k8s-diff-port-850839,Uid:30862793aa821efa1cb278f711cf3bca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066973419366,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30862793aa821efa1cb278f711cf3bca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30862793aa821efa1cb278f711cf3bca,kubernetes.io/config.seen: 2023-12-12T23:17:46.402529811Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-850839,Uid:13a108b8450f638b4168b3bbc0ad86a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066962687498,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a108b8450f638b4168b3bbc0ad86a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.180:8444,kubernetes.io/config.hash: 13a108b8450f638b4168b3bbc0ad86a2,kubernetes.io/config.seen: 2023-12-12T23:17:46.402527109Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-850839,Uid:ad5cc487748e024b1cc8f6e9d661904b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066918379985,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.39.180:2379,kubernetes.io/config.hash: ad5cc487748e024b1cc8f6e9d661904b,kubernetes.io/config.seen: 2023-12-12T23:17:46.402521203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=29762f06-d63e-49a2-98dc-d02d806a1704 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.531154606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22a6685b-1981-4f7f-b2eb-46cf2d4e183d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.531230342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22a6685b-1981-4f7f-b2eb-46cf2d4e183d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.531522308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22a6685b-1981-4f7f-b2eb-46cf2d4e183d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.553240948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dad7e7e0-a83d-48d6-9734-790cf6db3138 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.553381493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dad7e7e0-a83d-48d6-9734-790cf6db3138 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.554672471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=217d2996-ed1b-4aa2-a8b9-29cbbc0c1b44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.555050134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423881555034973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=217d2996-ed1b-4aa2-a8b9-29cbbc0c1b44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.555601477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67a24d76-610c-48ed-9996-95f690e00358 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.555656721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67a24d76-610c-48ed-9996-95f690e00358 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.555871383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67a24d76-610c-48ed-9996-95f690e00358 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.601359513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=980ab84f-1cb6-48bf-aad9-2f108645249e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.601420805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=980ab84f-1cb6-48bf-aad9-2f108645249e name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.603069315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=37ec43ee-f053-4756-b5b3-138df5bc4c1d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.603570403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423881603556776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=37ec43ee-f053-4756-b5b3-138df5bc4c1d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.604409049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ddb38a84-fbc5-4939-8ea4-e80dd741f1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.604458502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ddb38a84-fbc5-4939-8ea4-e80dd741f1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.604710792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ddb38a84-fbc5-4939-8ea4-e80dd741f1d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.647531310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cba02b8d-318d-42e4-a380-53407e550c4b name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.647622576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cba02b8d-318d-42e4-a380-53407e550c4b name=/runtime.v1.RuntimeService/Version
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.651731599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=020cb8c3-cf1f-4ff3-8e76-44bee78717ed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.652128700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423881652109953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=020cb8c3-cf1f-4ff3-8e76-44bee78717ed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.652869284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7922d087-ff72-47eb-be1f-dabf2d34e3f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.652942999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7922d087-ff72-47eb-be1f-dabf2d34e3f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:31:21 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:31:21.653134143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7922d087-ff72-47eb-be1f-dabf2d34e3f9 name=/runtime.v1.RuntimeService/ListContainers
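	(Editor's note: the repeated Version/ImageFsInfo/ListContainers request-response pairs above are routine CRI polling against crio.sock, typically from the kubelet or crictl. As an illustrative sketch, assuming journalctl access on the guest, the same excerpt can be pulled with the timestamps from the journal header:)
	    sudo journalctl -u crio --since "2023-12-12 23:17:15" --until "2023-12-12 23:31:21"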
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61878856aa70b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   234d377ea9583       storage-provisioner
	2d73c54827221       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e647fae6370e9       busybox
	79a5e815ba6ab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   fef179b548667       coredns-5dd5756b68-nrpzf
	fb7f07b5f8eb1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   61efdcb8ef867       kube-proxy-wjrjj
	8f486cf9b4b55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   234d377ea9583       storage-provisioner
	d45aa46de2dd0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   03cdf22f2cbcc       kube-scheduler-default-k8s-diff-port-850839
	57f9f49cbae33       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   0ebe21b79c6ec       etcd-default-k8s-diff-port-850839
	901c40ebab259       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   3d84dcf330299       kube-controller-manager-default-k8s-diff-port-850839
	71fd536d9f31c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   b4f73d9a01ecd       kube-apiserver-default-k8s-diff-port-850839
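	(Editor's note: this table is the node's CRI view of containers. A minimal sketch of reproducing it by hand, assuming crictl is installed on the guest as it is in minikube VMs:)
	    sudo crictl ps -a    # lists running and exited containers with image, state, name, attempt and pod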
	
	* 
	* ==> coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53658 - 15063 "HINFO IN 2628023027677409627.8915963882515406296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009459699s
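	(Editor's note: the same CoreDNS output can also be fetched through the API server; an illustrative command, with the pod name taken from the log above:)
	    kubectl --context default-k8s-diff-port-850839 -n kube-system logs coredns-5dd5756b68-nrpzf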
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-850839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-850839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=default-k8s-diff-port-850839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_09_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:09:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850839
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:28:35 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:28:35 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:28:35 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:28:35 +0000   Tue, 12 Dec 2023 23:18:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    default-k8s-diff-port-850839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c053d507b8d94034a23e89010a2bb079
	  System UUID:                c053d507-b8d9-4034-a23e-89010a2bb079
	  Boot ID:                    43f38a7a-c052-4bea-9ff4-1379c57765e8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-nrpzf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-850839                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-850839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-wjrjj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-850839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-zwzrg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-850839 event: Registered Node default-k8s-diff-port-850839 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-850839 event: Registered Node default-k8s-diff-port-850839 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070886] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.504214] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.220427] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153315] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.628802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.433457] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.120891] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.145475] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.131692] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.232611] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +17.607077] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[Dec12 23:18] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] <==
	* {"level":"info","ts":"2023-12-12T23:17:51.355156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:17:51.356551Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2023-12-12T23:17:51.356626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:17:51.357195Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:17:51.357332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:17:59.350923Z","caller":"traceutil/trace.go:171","msg":"trace[473719285] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"121.881144ms","start":"2023-12-12T23:17:59.229006Z","end":"2023-12-12T23:17:59.350887Z","steps":["trace[473719285] 'process raft request'  (duration: 121.770104ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:17:59.498501Z","caller":"traceutil/trace.go:171","msg":"trace[60084717] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"143.918042ms","start":"2023-12-12T23:17:59.354566Z","end":"2023-12-12T23:17:59.498484Z","steps":["trace[60084717] 'process raft request'  (duration: 122.868384ms)","trace[60084717] 'compare'  (duration: 20.90077ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:17:59.821567Z","caller":"traceutil/trace.go:171","msg":"trace[2071210752] linearizableReadLoop","detail":"{readStateIndex:573; appliedIndex:572; }","duration":"257.087334ms","start":"2023-12-12T23:17:59.564447Z","end":"2023-12-12T23:17:59.821534Z","steps":["trace[2071210752] 'read index received'  (duration: 245.01688ms)","trace[2071210752] 'applied index is now lower than readState.Index'  (duration: 12.069643ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:17:59.821711Z","caller":"traceutil/trace.go:171","msg":"trace[747053069] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"318.69518ms","start":"2023-12-12T23:17:59.502859Z","end":"2023-12-12T23:17:59.821554Z","steps":["trace[747053069] 'process raft request'  (duration: 306.654301ms)","trace[747053069] 'compare'  (duration: 11.918931ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:17:59.821738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.286578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" ","response":"range_response_count:1 size:3866"}
	{"level":"info","ts":"2023-12-12T23:17:59.821811Z","caller":"traceutil/trace.go:171","msg":"trace[1581602737] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg; range_end:; response_count:1; response_revision:544; }","duration":"257.373545ms","start":"2023-12-12T23:17:59.564424Z","end":"2023-12-12T23:17:59.821797Z","steps":["trace[1581602737] 'agreement among raft nodes before linearized reading'  (duration: 257.229736ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:17:59.822063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.395449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-850839\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-12-12T23:17:59.822133Z","caller":"traceutil/trace.go:171","msg":"trace[1936932577] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-850839; range_end:; response_count:1; response_revision:544; }","duration":"243.471074ms","start":"2023-12-12T23:17:59.578656Z","end":"2023-12-12T23:17:59.822127Z","steps":["trace[1936932577] 'agreement among raft nodes before linearized reading'  (duration: 243.373155ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:17:59.822308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.502841Z","time spent":"318.9257ms","remote":"127.0.0.1:53236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c54faf94a0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c54faf94a0\" value_size:690 lease:3934048619837274232 >> failure:<>"}
	{"level":"warn","ts":"2023-12-12T23:18:00.216375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.871052ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157420656692050422 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" mod_revision:455 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" value_size:4000 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T23:18:00.2168Z","caller":"traceutil/trace.go:171","msg":"trace[1389807667] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:574; }","duration":"138.587558ms","start":"2023-12-12T23:18:00.078199Z","end":"2023-12-12T23:18:00.216787Z","steps":["trace[1389807667] 'read index received'  (duration: 6.903448ms)","trace[1389807667] 'applied index is now lower than readState.Index'  (duration: 131.683501ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:18:00.21715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.955726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-850839\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-12-12T23:18:00.217223Z","caller":"traceutil/trace.go:171","msg":"trace[1088867811] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-850839; range_end:; response_count:1; response_revision:547; }","duration":"139.036697ms","start":"2023-12-12T23:18:00.078175Z","end":"2023-12-12T23:18:00.217212Z","steps":["trace[1088867811] 'agreement among raft nodes before linearized reading'  (duration: 138.89149ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:18:00.217613Z","caller":"traceutil/trace.go:171","msg":"trace[337745353] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"388.978302ms","start":"2023-12-12T23:17:59.82862Z","end":"2023-12-12T23:18:00.217598Z","steps":["trace[337745353] 'process raft request'  (duration: 256.452084ms)","trace[337745353] 'compare'  (duration: 130.509168ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:18:00.217741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.828608Z","time spent":"389.071691ms","remote":"127.0.0.1:53260","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4066,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" mod_revision:455 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" value_size:4000 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" > >"}
	{"level":"info","ts":"2023-12-12T23:18:00.217926Z","caller":"traceutil/trace.go:171","msg":"trace[455270054] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"330.942934ms","start":"2023-12-12T23:17:59.886972Z","end":"2023-12-12T23:18:00.217915Z","steps":["trace[455270054] 'process raft request'  (duration: 329.750265ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:18:00.218042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.886949Z","time spent":"331.060478ms","remote":"127.0.0.1:53236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":789,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c562c44e15\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c562c44e15\" value_size:694 lease:3934048619837274232 >> failure:<>"}
	{"level":"info","ts":"2023-12-12T23:27:51.394939Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":818}
	{"level":"info","ts":"2023-12-12T23:27:51.398542Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":818,"took":"2.533699ms","hash":2145782772}
	{"level":"info","ts":"2023-12-12T23:27:51.398627Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2145782772,"revision":818,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:31:22 up 14 min,  0 users,  load average: 0.23, 0.16, 0.10
	Linux default-k8s-diff-port-850839 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] <==
	* I1212 23:27:53.202111       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:27:54.201916       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:54.202060       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:27:54.202075       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:27:54.202170       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:54.202231       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:27:54.203603       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:28:52.981754       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:28:54.202642       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:54.202751       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:28:54.202783       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:28:54.204007       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:54.204075       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:28:54.204099       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:29:52.982456       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 23:30:52.981799       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:30:54.203373       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:54.203517       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:30:54.203606       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:30:54.204473       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:54.204615       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:30:54.204654       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] <==
	* I1212 23:25:36.244049       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:26:05.751651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:26:06.253181       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:26:35.758615       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:26:36.263808       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:27:05.764724       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:06.272622       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:27:35.770571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:36.282610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:05.777035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:06.293107       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:35.782845       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:36.303445       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:29:05.788662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:06.312685       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:29:08.506167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="2.509738ms"
	I1212 23:29:22.504163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="296.846µs"
	E1212 23:29:35.797543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:36.323011       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:30:05.802717       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:06.332368       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:30:35.808859       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:36.341371       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:31:05.816905       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:31:06.350725       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] <==
	* I1212 23:17:55.355741       1 server_others.go:69] "Using iptables proxy"
	I1212 23:17:55.371348       1 node.go:141] Successfully retrieved node IP: 192.168.39.180
	I1212 23:17:55.427677       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:17:55.427722       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:17:55.431906       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:17:55.431970       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:17:55.432206       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:17:55.432242       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:17:55.433723       1 config.go:188] "Starting service config controller"
	I1212 23:17:55.433733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:17:55.433747       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:17:55.433750       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:17:55.434324       1 config.go:315] "Starting node config controller"
	I1212 23:17:55.434332       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:17:55.534879       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:17:55.534949       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:17:55.534996       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] <==
	* I1212 23:17:50.457858       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:17:53.096894       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:17:53.097020       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:17:53.097056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:17:53.097086       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:17:53.203968       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:17:53.204035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:17:53.209921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:17:53.210095       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:17:53.210126       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:17:53.210146       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:17:53.310586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:17:15 UTC, ends at Tue 2023-12-12 23:31:22 UTC. --
	Dec 12 23:28:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:28:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:28:57 default-k8s-diff-port-850839 kubelet[925]: E1212 23:28:57.494200     925 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 23:28:57 default-k8s-diff-port-850839 kubelet[925]: E1212 23:28:57.494329     925 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 23:28:57 default-k8s-diff-port-850839 kubelet[925]: E1212 23:28:57.494529     925 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j58gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-zwzrg_kube-system(8b0d823e-df34-45eb-807c-17d8a9178bb8): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:28:57 default-k8s-diff-port-850839 kubelet[925]: E1212 23:28:57.494566     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:29:08 default-k8s-diff-port-850839 kubelet[925]: E1212 23:29:08.484939     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:29:22 default-k8s-diff-port-850839 kubelet[925]: E1212 23:29:22.485426     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:29:37 default-k8s-diff-port-850839 kubelet[925]: E1212 23:29:37.483199     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:29:46 default-k8s-diff-port-850839 kubelet[925]: E1212 23:29:46.500920     925 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:29:46 default-k8s-diff-port-850839 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:29:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:29:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:29:48 default-k8s-diff-port-850839 kubelet[925]: E1212 23:29:48.483688     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:30:02 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:02.482482     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:30:13 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:13.482955     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:30:24 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:24.482022     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:30:36 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:36.483021     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:30:46 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:46.500967     925 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:46 default-k8s-diff-port-850839 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:47 default-k8s-diff-port-850839 kubelet[925]: E1212 23:30:47.483002     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:31:00 default-k8s-diff-port-850839 kubelet[925]: E1212 23:31:00.483693     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:31:11 default-k8s-diff-port-850839 kubelet[925]: E1212 23:31:11.482763     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	
	* 
	* ==> storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] <==
	* I1212 23:18:25.827142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:18:25.845716       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:18:25.845819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:18:43.254598       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:18:43.255026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47!
	I1212 23:18:43.255180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac91ee07-d761-40c2-b0b7-efbc653bb61d", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47 became leader
	I1212 23:18:43.355699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47!
	
	* 
	* ==> storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] <==
	* I1212 23:17:55.279918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:18:25.284901       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zwzrg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg: exit status 1 (73.195461ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zwzrg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:23:13.361579   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115023 -n no-preload-115023
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:32:10.854475661 +0000 UTC m=+5370.665533101
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-115023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-115023 logs -n 25: (1.587993866s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-828988 sudo cat                              | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo find                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo crio                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-828988                                       | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-685244 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | disable-driver-mounts-685244                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
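	
	For reference, the multi-row `start` entry for default-k8s-diff-port-850839 in the table above corresponds to a single invocation along these lines (a reconstruction from the table rows only; the binary path is assumed from the MINIKUBE_BIN value shown in the log below, and the flags are exactly those listed in the table):
	
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-850839 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.4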
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:12:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:12:31.006246  128282 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:12:31.006380  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006389  128282 out.go:309] Setting ErrFile to fd 2...
	I1212 23:12:31.006393  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006549  128282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:12:31.007106  128282 out.go:303] Setting JSON to false
	I1212 23:12:31.008035  128282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14105,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:12:31.008097  128282 start.go:138] virtualization: kvm guest
	I1212 23:12:31.010317  128282 out.go:177] * [default-k8s-diff-port-850839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:12:31.011782  128282 notify.go:220] Checking for updates...
	I1212 23:12:31.011787  128282 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:12:31.013177  128282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:12:31.014626  128282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:12:31.016153  128282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:12:31.017420  128282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:12:31.018789  128282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:12:31.020548  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:12:31.021022  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.021073  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.036337  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I1212 23:12:31.036724  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.037285  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.037315  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.037677  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.037910  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.038190  128282 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:12:31.038482  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.038521  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.052455  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1212 23:12:31.052897  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.053408  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.053428  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.053842  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.054041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.090916  128282 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:12:31.092159  128282 start.go:298] selected driver: kvm2
	I1212 23:12:31.092174  128282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.092313  128282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:12:31.092991  128282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.093081  128282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:12:31.108612  128282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:12:31.108979  128282 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:12:31.109050  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:12:31.109064  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:12:31.109078  128282 start_flags.go:323] config:
	{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-85083
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.109261  128282 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.110991  128282 out.go:177] * Starting control plane node default-k8s-diff-port-850839 in cluster default-k8s-diff-port-850839
	I1212 23:12:28.611488  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:31.112184  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:12:31.112223  128282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:12:31.112231  128282 cache.go:56] Caching tarball of preloaded images
	I1212 23:12:31.112315  128282 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:12:31.112331  128282 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:12:31.112435  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:12:31.112621  128282 start.go:365] acquiring machines lock for default-k8s-diff-port-850839: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:12:34.691505  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:37.763538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:43.843515  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:46.915553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:52.995487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:56.067468  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:02.147575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:05.219586  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:11.299553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:14.371547  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:20.451538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:23.523565  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:29.603544  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:32.675516  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:38.755580  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:41.827595  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:47.907601  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:50.979707  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:57.059532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:00.131511  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:06.211489  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:09.283534  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:15.363535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:18.435583  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:24.515478  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:27.587546  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:33.667567  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:36.739532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:42.819531  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:45.891616  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:51.971509  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:55.043560  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:01.123510  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:04.195575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:10.275535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:13.347520  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:19.427542  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:22.499524  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:28.579575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:31.651552  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:37.731535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:40.803533  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:46.883561  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:49.955571  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:56.035557  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:59.107536  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:05.187487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:08.259527  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:14.339497  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:17.411598  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:20.416121  127900 start.go:369] acquired machines lock for "old-k8s-version-549640" in 4m27.702597236s
	I1212 23:16:20.416185  127900 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:20.416197  127900 fix.go:54] fixHost starting: 
	I1212 23:16:20.416598  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:20.416638  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:20.431626  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I1212 23:16:20.432088  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:20.432550  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:16:20.432573  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:20.432976  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:20.433174  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:20.433352  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:16:20.435450  127900 fix.go:102] recreateIfNeeded on old-k8s-version-549640: state=Stopped err=<nil>
	I1212 23:16:20.435477  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	W1212 23:16:20.435650  127900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:20.437467  127900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-549640" ...
	I1212 23:16:20.438890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Start
	I1212 23:16:20.439060  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring networks are active...
	I1212 23:16:20.439992  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network default is active
	I1212 23:16:20.440387  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network mk-old-k8s-version-549640 is active
	I1212 23:16:20.440738  127900 main.go:141] libmachine: (old-k8s-version-549640) Getting domain xml...
	I1212 23:16:20.441435  127900 main.go:141] libmachine: (old-k8s-version-549640) Creating domain...
	I1212 23:16:21.692826  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting to get IP...
	I1212 23:16:21.693784  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.694269  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.694313  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.694229  128878 retry.go:31] will retry after 250.302126ms: waiting for machine to come up
	I1212 23:16:21.945651  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.946122  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.946145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.946067  128878 retry.go:31] will retry after 271.460868ms: waiting for machine to come up
	I1212 23:16:22.219848  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.220326  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.220352  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.220248  128878 retry.go:31] will retry after 466.723624ms: waiting for machine to come up
	I1212 23:16:20.413611  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:20.413648  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:16:20.415967  127760 machine.go:91] provisioned docker machine in 4m37.407647774s
	I1212 23:16:20.416013  127760 fix.go:56] fixHost completed within 4m37.429684827s
	I1212 23:16:20.416025  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 4m37.429713708s
	W1212 23:16:20.416055  127760 start.go:694] error starting host: provision: host is not running
	W1212 23:16:20.416230  127760 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:16:20.416241  127760 start.go:709] Will try again in 5 seconds ...
	I1212 23:16:22.689020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.689524  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.689559  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.689474  128878 retry.go:31] will retry after 384.986526ms: waiting for machine to come up
	I1212 23:16:23.076020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.076428  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.076462  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.076365  128878 retry.go:31] will retry after 673.784203ms: waiting for machine to come up
	I1212 23:16:23.752374  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.752825  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.752859  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.752777  128878 retry.go:31] will retry after 744.371791ms: waiting for machine to come up
	I1212 23:16:24.498624  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:24.499057  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:24.499088  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:24.498994  128878 retry.go:31] will retry after 1.095766265s: waiting for machine to come up
	I1212 23:16:25.596742  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:25.597192  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:25.597217  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:25.597133  128878 retry.go:31] will retry after 1.340596782s: waiting for machine to come up
	I1212 23:16:26.939593  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:26.939933  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:26.939957  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:26.939881  128878 retry.go:31] will retry after 1.546075974s: waiting for machine to come up
	I1212 23:16:25.417922  127760 start.go:365] acquiring machines lock for embed-certs-809120: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:16:28.488543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:28.488923  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:28.488949  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:28.488883  128878 retry.go:31] will retry after 2.06517547s: waiting for machine to come up
	I1212 23:16:30.555809  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:30.556300  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:30.556330  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:30.556262  128878 retry.go:31] will retry after 2.237409729s: waiting for machine to come up
	I1212 23:16:32.796273  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:32.796684  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:32.796712  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:32.796629  128878 retry.go:31] will retry after 3.535954383s: waiting for machine to come up
	I1212 23:16:36.333758  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:36.334211  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:36.334243  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:36.334143  128878 retry.go:31] will retry after 3.820382113s: waiting for machine to come up
	I1212 23:16:41.367963  128156 start.go:369] acquired machines lock for "no-preload-115023" in 4m21.778030837s
	I1212 23:16:41.368034  128156 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:41.368046  128156 fix.go:54] fixHost starting: 
	I1212 23:16:41.368459  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:41.368498  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:41.384557  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1212 23:16:41.385004  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:41.385448  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:16:41.385471  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:41.385799  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:41.386007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:16:41.386192  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:16:41.387807  128156 fix.go:102] recreateIfNeeded on no-preload-115023: state=Stopped err=<nil>
	I1212 23:16:41.387858  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	W1212 23:16:41.388030  128156 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:41.390189  128156 out.go:177] * Restarting existing kvm2 VM for "no-preload-115023" ...
	I1212 23:16:40.159111  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159503  127900 main.go:141] libmachine: (old-k8s-version-549640) Found IP for machine: 192.168.61.146
	I1212 23:16:40.159530  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserving static IP address...
	I1212 23:16:40.159543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has current primary IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159970  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.160042  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | skip adding static IP to network mk-old-k8s-version-549640 - found existing host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"}
	I1212 23:16:40.160060  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserved static IP address: 192.168.61.146
	I1212 23:16:40.160072  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for SSH to be available...
	I1212 23:16:40.160087  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Getting to WaitForSSH function...
	I1212 23:16:40.162048  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162377  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.162417  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162498  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH client type: external
	I1212 23:16:40.162571  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa (-rw-------)
	I1212 23:16:40.162609  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:16:40.162629  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | About to run SSH command:
	I1212 23:16:40.162644  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | exit 0
	I1212 23:16:40.254804  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | SSH cmd err, output: <nil>: 
	I1212 23:16:40.255235  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetConfigRaw
	I1212 23:16:40.255885  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.258196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258526  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.258551  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258806  127900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:16:40.259036  127900 machine.go:88] provisioning docker machine ...
	I1212 23:16:40.259059  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:40.259292  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259454  127900 buildroot.go:166] provisioning hostname "old-k8s-version-549640"
	I1212 23:16:40.259475  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259624  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.261311  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261561  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.261583  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261686  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.261818  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.261974  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.262114  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.262270  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.262645  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.262666  127900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-549640 && echo "old-k8s-version-549640" | sudo tee /etc/hostname
	I1212 23:16:40.395342  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-549640
	
	I1212 23:16:40.395376  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.398008  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398391  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.398430  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398533  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.398716  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.398890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.399024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.399152  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.399489  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.399510  127900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-549640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-549640/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-549640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:40.526781  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:40.526824  127900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:16:40.526847  127900 buildroot.go:174] setting up certificates
	I1212 23:16:40.526859  127900 provision.go:83] configureAuth start
	I1212 23:16:40.526877  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.527276  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.530483  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.530876  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.530908  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.531162  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.533161  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533456  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.533488  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533567  127900 provision.go:138] copyHostCerts
	I1212 23:16:40.533625  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:16:40.533645  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:16:40.533711  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:16:40.533799  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:16:40.533806  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:16:40.533829  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:16:40.533882  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:16:40.533889  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:16:40.533913  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:16:40.533957  127900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-549640 san=[192.168.61.146 192.168.61.146 localhost 127.0.0.1 minikube old-k8s-version-549640]
	I1212 23:16:40.630542  127900 provision.go:172] copyRemoteCerts
	I1212 23:16:40.630611  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:40.630639  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.633145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633408  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.633433  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633579  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.633790  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.633944  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.634162  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:40.725498  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:16:40.748097  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:16:40.769852  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:16:40.791381  127900 provision.go:86] duration metric: configureAuth took 264.501961ms
	I1212 23:16:40.791417  127900 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:40.791602  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:16:40.791678  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.794113  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794479  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.794514  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794653  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.794864  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795055  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795234  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.795443  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.795777  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.795807  127900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:16:41.103469  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:16:41.103503  127900 machine.go:91] provisioned docker machine in 844.450063ms
	I1212 23:16:41.103517  127900 start.go:300] post-start starting for "old-k8s-version-549640" (driver="kvm2")
	I1212 23:16:41.103527  127900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:41.103547  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.103894  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:41.103923  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.106459  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.106835  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.106864  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.107013  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.107190  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.107363  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.107532  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.201177  127900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:41.205686  127900 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:41.205711  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:16:41.205773  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:16:41.205862  127900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:16:41.205970  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:41.214591  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:41.240854  127900 start.go:303] post-start completed in 137.32025ms
	I1212 23:16:41.240885  127900 fix.go:56] fixHost completed within 20.824687398s
	I1212 23:16:41.240915  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.243633  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244071  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.244104  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244300  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.244517  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244651  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244806  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.244981  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:41.245337  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:41.245350  127900 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:16:41.367815  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423001.317394085
	
	I1212 23:16:41.367837  127900 fix.go:206] guest clock: 1702423001.317394085
	I1212 23:16:41.367844  127900 fix.go:219] Guest: 2023-12-12 23:16:41.317394085 +0000 UTC Remote: 2023-12-12 23:16:41.240889292 +0000 UTC m=+288.685284781 (delta=76.504793ms)
	I1212 23:16:41.367863  127900 fix.go:190] guest clock delta is within tolerance: 76.504793ms
	I1212 23:16:41.367868  127900 start.go:83] releasing machines lock for "old-k8s-version-549640", held for 20.951706122s
	I1212 23:16:41.367895  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.368219  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:41.370769  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371172  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.371196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371378  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.371904  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372069  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372157  127900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:16:41.372206  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.372409  127900 ssh_runner.go:195] Run: cat /version.json
	I1212 23:16:41.372438  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.374847  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.374869  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375341  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375373  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375401  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375419  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375526  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375661  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375749  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.375835  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.376026  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376031  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.488636  127900 ssh_runner.go:195] Run: systemctl --version
	I1212 23:16:41.494315  127900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:16:41.645474  127900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:16:41.652912  127900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:16:41.652988  127900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:16:41.667662  127900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:16:41.667680  127900 start.go:475] detecting cgroup driver to use...
	I1212 23:16:41.667747  127900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:16:41.681625  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:16:41.693475  127900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:16:41.693540  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:16:41.705743  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:16:41.719152  127900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:16:41.819641  127900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:16:41.929543  127900 docker.go:219] disabling docker service ...
	I1212 23:16:41.929617  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:16:41.943407  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:16:41.955372  127900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:16:42.063078  127900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:16:42.177422  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:16:42.192994  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:16:42.211887  127900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:16:42.211943  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.223418  127900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:16:42.223486  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.234905  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.245973  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
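The sed invocations above each rewrite a single `key = value` line in CRI-O's 02-crio.conf drop-in (pause image, cgroup manager, conmon cgroup). A rough Go sketch of the same line-rewrite idea using a regexp instead of sed; the helper name and the local file path are illustrative only:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites any existing `key = ...` line in a crio drop-in file,
// the same effect as the sed commands in the log (run here against a local copy).
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Example against a scratch copy of 02-crio.conf in the working directory.
	if err := setConfValue("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.1"); err != nil {
		fmt.Println("error:", err)
		return
	}
	if err := setConfValue("02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println("error:", err)
	}
}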
	I1212 23:16:42.261016  127900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:16:42.272819  127900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:16:42.283308  127900 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:16:42.283381  127900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:16:42.296365  127900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
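The fallback above is: probe the bridge-netfilter sysctl, load the br_netfilter module if the key is missing, then turn on IPv4 forwarding. A small Go sketch of that probe-then-fallback sequence using the same commands run locally (the helper itself is made up for illustration):

package main

import (
	"log"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the probe-then-fallback seen in the log:
// if the sysctl key is missing, load br_netfilter, then enable ip_forward.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key not present yet; try loading the kernel module instead.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}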
	I1212 23:16:42.307038  127900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:16:42.412672  127900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:16:42.590363  127900 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:16:42.590470  127900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:16:42.596285  127900 start.go:543] Will wait 60s for crictl version
	I1212 23:16:42.596360  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:42.600633  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:16:42.638709  127900 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:16:42.638811  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.694435  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.750327  127900 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 23:16:41.391501  128156 main.go:141] libmachine: (no-preload-115023) Calling .Start
	I1212 23:16:41.391671  128156 main.go:141] libmachine: (no-preload-115023) Ensuring networks are active...
	I1212 23:16:41.392314  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network default is active
	I1212 23:16:41.392624  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network mk-no-preload-115023 is active
	I1212 23:16:41.393075  128156 main.go:141] libmachine: (no-preload-115023) Getting domain xml...
	I1212 23:16:41.393720  128156 main.go:141] libmachine: (no-preload-115023) Creating domain...
	I1212 23:16:42.669200  128156 main.go:141] libmachine: (no-preload-115023) Waiting to get IP...
	I1212 23:16:42.670068  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.670482  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.670582  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.670462  128998 retry.go:31] will retry after 201.350715ms: waiting for machine to come up
	I1212 23:16:42.874061  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.874543  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.874576  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.874492  128998 retry.go:31] will retry after 331.205906ms: waiting for machine to come up
	I1212 23:16:43.207045  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.207590  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.207618  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.207533  128998 retry.go:31] will retry after 343.139691ms: waiting for machine to come up
	I1212 23:16:43.552253  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.552737  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.552769  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.552683  128998 retry.go:31] will retry after 606.192126ms: waiting for machine to come up
	I1212 23:16:44.160409  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.160877  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.160923  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.160842  128998 retry.go:31] will retry after 713.164162ms: waiting for machine to come up
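The no-preload VM is being polled for a DHCP lease, sleeping a little longer (plus jitter) after each miss, as the retry.go lines show. A compact Go sketch of that retry-with-growing-backoff shape; lookupIP below is a stand-in for the libvirt lease query, not a real minikube function:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it fails until
// the (simulated) machine has come up.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.72.32", nil
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the wait a little each round, with jitter, as in the log's
		// "will retry after ..." messages.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("retry %d: %v, waiting %v\n", attempt, err, wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
}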
	I1212 23:16:42.751897  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:42.754490  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.754832  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:42.754867  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.755047  127900 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:16:42.759290  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
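The bash one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal line, append a fresh IP-tab-hostname entry, and copy the result back into place. A Go sketch of the same filter-and-append idea against a scratch file (the path and entry are placeholders):

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<hostname>" and appends a
// single fresh "IP\thostname" entry, mirroring the grep -v / echo trick.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing to a scratch file here; the real target would be /etc/hosts.
	if err := upsertHostsEntry("hosts.test", "192.168.61.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}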
	I1212 23:16:42.770851  127900 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 23:16:42.770913  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:42.822484  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:42.822559  127900 ssh_runner.go:195] Run: which lz4
	I1212 23:16:42.826907  127900 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:16:42.831601  127900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:16:42.831633  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 23:16:44.643588  127900 crio.go:444] Took 1.816704 seconds to copy over tarball
	I1212 23:16:44.643671  127900 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:16:47.603870  127900 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960150759s)
	I1212 23:16:47.603904  127900 crio.go:451] Took 2.960288 seconds to extract the tarball
	I1212 23:16:47.603918  127900 ssh_runner.go:146] rm: /preloaded.tar.lz4
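The preload path above checks for /preloaded.tar.lz4, copies the cached tarball over when it is missing, unpacks it with lz4-compressed tar into /var, then deletes it. A minimal Go sketch of the unpack-and-clean-up tail of that sequence, run locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the lz4-compressed image tarball into destDir and
// then removes it, matching the log's "tar -I lz4 ..." / rm pair.
func extractPreload(tarball, destDir string) (time.Duration, error) {
	start := time.Now()
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball).Run(); err != nil {
		return 0, err
	}
	return time.Since(start), exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	elapsed, err := extractPreload("/preloaded.tar.lz4", "/var")
	fmt.Printf("extracted in %v, err=%v\n", elapsed, err)
}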
	I1212 23:16:44.875548  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.875971  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.876003  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.875908  128998 retry.go:31] will retry after 928.762857ms: waiting for machine to come up
	I1212 23:16:45.806556  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:45.806983  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:45.807019  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:45.806932  128998 retry.go:31] will retry after 945.322601ms: waiting for machine to come up
	I1212 23:16:46.754374  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:46.754834  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:46.754869  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:46.754818  128998 retry.go:31] will retry after 1.373584303s: waiting for machine to come up
	I1212 23:16:48.130434  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:48.130917  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:48.130950  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:48.130870  128998 retry.go:31] will retry after 1.683447661s: waiting for machine to come up
	I1212 23:16:47.644193  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:47.696129  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:47.696156  127900 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.696314  127900 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.696273  127900 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.696242  127900 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.696306  127900 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.696371  127900 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.696445  127900 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:16:47.697649  127900 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.697713  127900 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.697816  127900 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.697955  127900 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:16:47.698013  127900 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.698109  127900 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.698124  127900 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.698341  127900 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.888397  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.897712  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.897790  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.910016  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 23:16:47.911074  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.912891  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.923071  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.995042  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:48.022161  127900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 23:16:48.022215  127900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.022270  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053132  127900 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 23:16:48.053181  127900 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.053236  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053493  127900 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 23:16:48.053531  127900 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.053588  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.123888  127900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 23:16:48.123949  127900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.123889  127900 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 23:16:48.124009  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124022  127900 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 23:16:48.124077  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124089  127900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 23:16:48.124111  127900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 23:16:48.124141  127900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.124171  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124115  127900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.124249  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.205456  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.205488  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.205609  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.205650  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.205702  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 23:16:48.205789  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.205814  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.351665  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 23:16:48.351700  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 23:16:48.360026  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 23:16:48.363255  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 23:16:48.363297  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 23:16:48.363376  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 23:16:48.363413  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:16:48.363525  127900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369271  127900 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 23:16:48.369289  127900 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369326  127900 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 23:16:50.628595  127900 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.259242667s)
	I1212 23:16:50.628629  127900 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 23:16:50.628679  127900 cache_images.go:92] LoadImages completed in 2.932510127s
	W1212 23:16:50.628774  127900 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
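LoadImages above works per image: inspect the runtime for the expected image ID and, only when it is absent, remove the stale tag and load the cached tarball. A compressed Go sketch of that decision; the expected hash and tarball path below are taken from the pause:3.1 lines in this log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage loads a cached image tarball only if the runtime does not
// already hold the expected image ID, like the "needs transfer" check above.
func ensureImage(image, wantID, tarball string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present, nothing to transfer
	}
	// Drop whatever is there under that tag, then load from the local cache.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"/var/lib/minikube/images/pause_3.1")
	fmt.Println("ensureImage:", err)
}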
	I1212 23:16:50.628871  127900 ssh_runner.go:195] Run: crio config
	I1212 23:16:50.696623  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:16:50.696645  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:16:50.696665  127900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:16:50.696690  127900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-549640 NodeName:old-k8s-version-549640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:16:50.696857  127900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-549640"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-549640
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.146:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:16:50.696950  127900 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-549640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:16:50.697013  127900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 23:16:50.706222  127900 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:16:50.706309  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:16:50.714679  127900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 23:16:50.732119  127900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:16:50.749596  127900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 23:16:50.766445  127900 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I1212 23:16:50.770611  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:50.783162  127900 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640 for IP: 192.168.61.146
	I1212 23:16:50.783205  127900 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:16:50.783434  127900 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:16:50.783504  127900 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:16:50.783623  127900 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.key
	I1212 23:16:50.783701  127900 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key.a124ebb4
	I1212 23:16:50.783781  127900 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key
	I1212 23:16:50.784002  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:16:50.784053  127900 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:16:50.784070  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:16:50.784118  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:16:50.784162  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:16:50.784201  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:16:50.784260  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:50.785202  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:16:50.813072  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:16:50.838714  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:16:50.863302  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:16:50.891365  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:16:50.916623  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:16:50.946894  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:16:50.974859  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:16:51.002629  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:16:51.027782  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:16:51.052384  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:16:51.077430  127900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:16:51.094703  127900 ssh_runner.go:195] Run: openssl version
	I1212 23:16:51.100625  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:16:51.111038  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116246  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116342  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.122069  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:16:51.132325  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:16:51.142392  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147278  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147353  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.153446  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:16:51.163491  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:16:51.173393  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178482  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178560  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.184710  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
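Each CA above is installed by computing its OpenSSL subject hash and symlinking the PEM to <hash>.0, which is how OpenSSL's hashed certificate directory lookup finds it (e.g. 838252.pem -> 3ec20f2e.0). A Go sketch of that step; the paths are placeholders, and the real run performs this over SSH with sudo:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks certPath into certsDir under "<subject-hash>.0",
// the layout OpenSSL uses to look up trusted CAs by hash.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA symlink installed")
}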
	I1212 23:16:51.194819  127900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:16:51.199808  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:16:51.206208  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:16:51.212498  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:16:51.218555  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:16:51.224923  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:16:51.231298  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:16:51.237570  127900 kubeadm.go:404] StartCluster: {Name:old-k8s-version-549640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:16:51.237672  127900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:16:51.237752  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:16:51.283890  127900 cri.go:89] found id: ""
	I1212 23:16:51.283985  127900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:16:51.296861  127900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:16:51.296897  127900 kubeadm.go:636] restartCluster start
	I1212 23:16:51.296990  127900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:16:51.306034  127900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.307730  127900 kubeconfig.go:92] found "old-k8s-version-549640" server: "https://192.168.61.146:8443"
	I1212 23:16:51.311721  127900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:16:51.320683  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.320831  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.332122  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.332145  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.332197  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.342755  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.843464  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.843575  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.854933  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:52.343493  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.343579  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.354884  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:49.816605  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:49.816934  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:49.816968  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:49.816881  128998 retry.go:31] will retry after 1.775884699s: waiting for machine to come up
	I1212 23:16:51.594388  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:51.594915  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:51.594952  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:51.594866  128998 retry.go:31] will retry after 1.948886075s: waiting for machine to come up
	I1212 23:16:53.546035  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:53.546503  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:53.546538  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:53.546441  128998 retry.go:31] will retry after 3.530621748s: waiting for machine to come up
	I1212 23:16:52.842987  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.843085  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.854637  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.343155  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.343261  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.354960  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.843482  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.843555  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.854488  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.342926  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.343028  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.357489  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.843024  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.843111  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.854764  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.343252  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.343363  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.354798  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.843831  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.843931  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.855077  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.343753  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.343827  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.354659  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.843304  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.843423  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.854727  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.343292  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.343428  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.354360  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.078854  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:57.079265  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:57.079287  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:57.079224  128998 retry.go:31] will retry after 3.552473985s: waiting for machine to come up
	I1212 23:17:01.924642  128282 start.go:369] acquired machines lock for "default-k8s-diff-port-850839" in 4m30.811975302s
	I1212 23:17:01.924716  128282 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:01.924725  128282 fix.go:54] fixHost starting: 
	I1212 23:17:01.925164  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:01.925207  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:01.942895  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1212 23:17:01.943340  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:01.943906  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:01.943938  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:01.944371  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:01.944594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:01.944819  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:01.946719  128282 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850839: state=Stopped err=<nil>
	I1212 23:17:01.946759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	W1212 23:17:01.946947  128282 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:01.949597  128282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850839" ...
	I1212 23:16:57.843410  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.843484  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.854821  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.343379  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.343470  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.354868  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.843473  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.843594  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.854752  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.343324  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.343432  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.354442  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.842979  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.843086  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.854537  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.343125  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.343201  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.354401  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.843565  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.843642  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.854663  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:01.321433  127900 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:01.321466  127900 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:01.321477  127900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:01.321534  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:01.361643  127900 cri.go:89] found id: ""
	I1212 23:17:01.361739  127900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:01.380002  127900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:01.388875  127900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:01.388944  127900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397644  127900 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397690  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:01.528111  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:00.635998  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636444  128156 main.go:141] libmachine: (no-preload-115023) Found IP for machine: 192.168.72.32
	I1212 23:17:00.636462  128156 main.go:141] libmachine: (no-preload-115023) Reserving static IP address...
	I1212 23:17:00.636478  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has current primary IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.636925  128156 main.go:141] libmachine: (no-preload-115023) DBG | skip adding static IP to network mk-no-preload-115023 - found existing host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"}
	I1212 23:17:00.636939  128156 main.go:141] libmachine: (no-preload-115023) Reserved static IP address: 192.168.72.32
	I1212 23:17:00.636961  128156 main.go:141] libmachine: (no-preload-115023) Waiting for SSH to be available...
	I1212 23:17:00.636971  128156 main.go:141] libmachine: (no-preload-115023) DBG | Getting to WaitForSSH function...
	I1212 23:17:00.639074  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639400  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.639443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639546  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH client type: external
	I1212 23:17:00.639586  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa (-rw-------)
	I1212 23:17:00.639629  128156 main.go:141] libmachine: (no-preload-115023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:00.639644  128156 main.go:141] libmachine: (no-preload-115023) DBG | About to run SSH command:
	I1212 23:17:00.639663  128156 main.go:141] libmachine: (no-preload-115023) DBG | exit 0
	I1212 23:17:00.734735  128156 main.go:141] libmachine: (no-preload-115023) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:00.735132  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetConfigRaw
	I1212 23:17:00.735813  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:00.738429  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.738828  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.738871  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.739049  128156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:17:00.739276  128156 machine.go:88] provisioning docker machine ...
	I1212 23:17:00.739299  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:00.739537  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739695  128156 buildroot.go:166] provisioning hostname "no-preload-115023"
	I1212 23:17:00.739717  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739879  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.742096  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742404  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.742443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742591  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.742756  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.742925  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.743067  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.743224  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.743733  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.743751  128156 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-115023 && echo "no-preload-115023" | sudo tee /etc/hostname
	I1212 23:17:00.888573  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-115023
	
	I1212 23:17:00.888610  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.891302  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891619  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.891664  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891852  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.892092  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892263  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892419  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.892584  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.892911  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.892930  128156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-115023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-115023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-115023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:01.032180  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:01.032222  128156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:01.032257  128156 buildroot.go:174] setting up certificates
	I1212 23:17:01.032273  128156 provision.go:83] configureAuth start
	I1212 23:17:01.032291  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:01.032653  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.035024  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035334  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.035361  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035494  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.037594  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.037898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.037930  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.038066  128156 provision.go:138] copyHostCerts
	I1212 23:17:01.038122  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:01.038143  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:01.038202  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:01.038322  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:01.038334  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:01.038355  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:01.038470  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:01.038481  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:01.038499  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:01.038575  128156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.no-preload-115023 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube no-preload-115023]
	I1212 23:17:01.146965  128156 provision.go:172] copyRemoteCerts
	I1212 23:17:01.147027  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:01.147053  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.149326  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149621  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.149656  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149774  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.149969  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.150118  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.150238  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.244271  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:01.267206  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:17:01.289286  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:01.311940  128156 provision.go:86] duration metric: configureAuth took 279.648376ms
	I1212 23:17:01.311970  128156 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:01.312144  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:17:01.312229  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.314543  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.314881  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.314907  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.315055  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.315281  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315469  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315658  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.315821  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.316162  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.316185  128156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:01.644687  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:01.644737  128156 machine.go:91] provisioned docker machine in 905.44182ms
	I1212 23:17:01.644750  128156 start.go:300] post-start starting for "no-preload-115023" (driver="kvm2")
	I1212 23:17:01.644764  128156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:01.644781  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.645148  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:01.645186  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.647976  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648333  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.648369  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648572  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.648769  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.648972  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.649102  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.746191  128156 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:01.750374  128156 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:01.750416  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:01.750499  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:01.750605  128156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:01.750721  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:01.760389  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:01.788014  128156 start.go:303] post-start completed in 143.244652ms
	I1212 23:17:01.788052  128156 fix.go:56] fixHost completed within 20.420006869s
	I1212 23:17:01.788083  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.790868  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791357  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.791392  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791675  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.791911  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792276  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.792463  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.792889  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.792903  128156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:01.924437  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423021.865464875
	
	I1212 23:17:01.924464  128156 fix.go:206] guest clock: 1702423021.865464875
	I1212 23:17:01.924477  128156 fix.go:219] Guest: 2023-12-12 23:17:01.865464875 +0000 UTC Remote: 2023-12-12 23:17:01.788058057 +0000 UTC m=+282.352654726 (delta=77.406818ms)
	I1212 23:17:01.924532  128156 fix.go:190] guest clock delta is within tolerance: 77.406818ms
	I1212 23:17:01.924542  128156 start.go:83] releasing machines lock for "no-preload-115023", held for 20.556534447s
	I1212 23:17:01.924581  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.924871  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.927873  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928206  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.928238  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928450  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929098  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929301  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929387  128156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:01.929448  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.929516  128156 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:01.929559  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.932560  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.932593  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933001  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933031  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933059  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933081  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933340  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933430  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933547  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933659  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933919  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.933923  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.934097  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.934170  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:02.029559  128156 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:02.056382  128156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:02.199375  128156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:02.207131  128156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:02.207208  128156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:02.227083  128156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:02.227111  128156 start.go:475] detecting cgroup driver to use...
	I1212 23:17:02.227174  128156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:02.241611  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:02.253610  128156 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:02.253675  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:02.266973  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:02.280712  128156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:02.406583  128156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:02.548155  128156 docker.go:219] disabling docker service ...
	I1212 23:17:02.548235  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:02.563410  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:02.575968  128156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:02.697146  128156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:02.828963  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:02.842559  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:02.865357  128156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:02.865433  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.878154  128156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:02.878231  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.892188  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.903286  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.915201  128156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:02.927665  128156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:02.938466  128156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:02.938538  128156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:02.954428  128156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:02.966197  128156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:03.109663  128156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:03.322982  128156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:03.323068  128156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:03.329800  128156 start.go:543] Will wait 60s for crictl version
	I1212 23:17:03.329866  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.335779  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:03.385099  128156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:03.385190  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.438085  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.482280  128156 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:17:03.483965  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:03.487086  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487464  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:03.487495  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487694  128156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:03.492027  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:03.506463  128156 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:17:03.506503  128156 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:03.544301  128156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:17:03.544329  128156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:17:03.544386  128156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.544441  128156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.544474  128156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.544440  128156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.544509  128156 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.544527  128156 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 23:17:03.545656  128156 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.545678  128156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.545726  128156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.545657  128156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.545747  128156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.545758  128156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.545662  128156 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 23:17:03.546098  128156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.724978  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.727403  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.739085  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 23:17:03.747535  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.748286  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.780484  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.826808  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.834529  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.840840  128156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 23:17:03.840893  128156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.840940  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.868056  128156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 23:17:03.868106  128156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.868157  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.043948  128156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 23:17:04.044014  128156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.044063  128156 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 23:17:04.044102  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044167  128156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 23:17:04.044207  128156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.044252  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044103  128156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.044334  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044375  128156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 23:17:04.044401  128156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.044444  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:04.044446  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044489  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:04.044401  128156 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 23:17:04.044520  128156 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.044545  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.065308  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.065326  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.065380  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.065495  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.065541  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.167939  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.168062  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.207196  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.207344  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.261679  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 23:17:04.261767  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:04.293250  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 23:17:04.293382  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:04.298843  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.298927  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.298960  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.299043  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.299066  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 23:17:04.299125  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:04.299187  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299201  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.299219  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299272  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.302178  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 23:17:04.302502  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 23:17:04.311377  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 23:17:04.311421  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 23:17:01.950988  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Start
	I1212 23:17:01.951206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring networks are active...
	I1212 23:17:01.952109  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network default is active
	I1212 23:17:01.952459  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network mk-default-k8s-diff-port-850839 is active
	I1212 23:17:01.953041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Getting domain xml...
	I1212 23:17:01.953769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Creating domain...
	I1212 23:17:03.377195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting to get IP...
	I1212 23:17:03.378157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378619  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378696  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.378589  129129 retry.go:31] will retry after 235.08446ms: waiting for machine to come up
	I1212 23:17:03.614763  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615258  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615288  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.615169  129129 retry.go:31] will retry after 349.415903ms: waiting for machine to come up
	I1212 23:17:03.965990  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966570  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966670  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.966628  129129 retry.go:31] will retry after 318.332956ms: waiting for machine to come up
	I1212 23:17:04.286225  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286728  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.286676  129129 retry.go:31] will retry after 554.258457ms: waiting for machine to come up
	I1212 23:17:04.843362  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843928  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843975  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.843882  129129 retry.go:31] will retry after 539.399246ms: waiting for machine to come up
	I1212 23:17:05.384807  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385237  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385267  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:05.385213  129129 retry.go:31] will retry after 793.160743ms: waiting for machine to come up
	I1212 23:17:02.653275  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125123388s)
	I1212 23:17:02.653305  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:02.888884  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.005743  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.124339  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:03.124427  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.154719  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.679193  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.179381  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.678654  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.701429  127900 api_server.go:72] duration metric: took 1.577102613s to wait for apiserver process to appear ...
	I1212 23:17:04.701456  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:04.701476  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:06.586652  128156 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.287578103s)
	I1212 23:17:06.586693  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 23:17:06.586710  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.28741029s)
	I1212 23:17:06.586731  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 23:17:06.586768  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:06.586859  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:09.053122  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.466228622s)
	I1212 23:17:09.053156  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 23:17:09.053187  128156 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:09.053239  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:06.180206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180792  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180826  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:06.180767  129129 retry.go:31] will retry after 1.183884482s: waiting for machine to come up
	I1212 23:17:07.365977  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366537  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:07.366465  129129 retry.go:31] will retry after 1.171346567s: waiting for machine to come up
	I1212 23:17:08.539985  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540457  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540493  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:08.540397  129129 retry.go:31] will retry after 1.176896883s: waiting for machine to come up
	I1212 23:17:09.718657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719110  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719142  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:09.719045  129129 retry.go:31] will retry after 2.075378734s: waiting for machine to come up
	I1212 23:17:09.703531  127900 api_server.go:269] stopped: https://192.168.61.146:8443/healthz: Get "https://192.168.61.146:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 23:17:09.703600  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:10.880325  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:10.880391  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:11.380886  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.408357  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.408420  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:11.880867  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.888735  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.888783  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:12.381393  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:12.390271  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:12.399780  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:12.399818  127900 api_server.go:131] duration metric: took 7.698353874s to wait for apiserver health ...
	I1212 23:17:12.399832  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:17:12.399842  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:12.401614  127900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:12.403088  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:12.416722  127900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:12.439451  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:12.452826  127900 system_pods.go:59] 7 kube-system pods found
	I1212 23:17:12.452870  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:12.452879  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:12.452886  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:12.452893  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Pending
	I1212 23:17:12.452901  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:12.452907  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:12.452914  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:12.452924  127900 system_pods.go:74] duration metric: took 13.446573ms to wait for pod list to return data ...
	I1212 23:17:12.452937  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:12.459638  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:12.459679  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:12.459697  127900 node_conditions.go:105] duration metric: took 6.754094ms to run NodePressure ...
	I1212 23:17:12.459722  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:12.767529  127900 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775696  127900 kubeadm.go:787] kubelet initialised
	I1212 23:17:12.775720  127900 kubeadm.go:788] duration metric: took 8.16519ms waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775730  127900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:12.781477  127900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.789136  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789163  127900 pod_ready.go:81] duration metric: took 7.661481ms waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.789174  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789183  127900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.794618  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794658  127900 pod_ready.go:81] duration metric: took 5.45869ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.794671  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794689  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.801021  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801052  127900 pod_ready.go:81] duration metric: took 6.346779ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.801065  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801074  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.845211  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845243  127900 pod_ready.go:81] duration metric: took 44.152184ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.845256  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845263  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.244325  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244373  127900 pod_ready.go:81] duration metric: took 399.10083ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.244387  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244403  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.644414  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644512  127900 pod_ready.go:81] duration metric: took 400.062676ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.644545  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644566  127900 pod_ready.go:38] duration metric: took 868.822745ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:13.644601  127900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:13.674724  127900 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:13.674813  127900 kubeadm.go:640] restartCluster took 22.377904832s
	I1212 23:17:13.674838  127900 kubeadm.go:406] StartCluster complete in 22.437279451s
	I1212 23:17:13.674872  127900 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.674959  127900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:13.677846  127900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.680423  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:13.680690  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:17:13.680746  127900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:13.680815  127900 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-549640"
	I1212 23:17:13.680839  127900 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-549640"
	W1212 23:17:13.680850  127900 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:13.680938  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.681342  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.681377  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.681658  127900 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-549640"
	I1212 23:17:13.681702  127900 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-549640"
	W1212 23:17:13.681711  127900 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:13.681780  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.682200  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.682237  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.682462  127900 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-549640"
	I1212 23:17:13.682544  127900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-549640"
	I1212 23:17:13.683062  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.683126  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.702138  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1212 23:17:13.702380  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I1212 23:17:13.702684  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702944  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702956  127900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-549640" context rescaled to 1 replicas
	I1212 23:17:13.702990  127900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:13.704074  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.704211  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.706640  127900 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:13.708293  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:13.706664  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706671  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706806  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I1212 23:17:13.709240  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.709383  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.709441  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.709852  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.709874  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.710209  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.710818  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.710867  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.711123  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.711765  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.711842  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.717964  127900 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-549640"
	W1212 23:17:13.717989  127900 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:13.718020  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.718447  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.718493  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.738529  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1212 23:17:13.739214  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.739827  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.739854  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.740246  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.740847  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.740917  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.747710  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1212 23:17:13.748150  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.748772  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.748793  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.749177  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.749348  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.749413  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 23:17:13.750144  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.751385  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.753201  127900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:13.754814  127900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:13.754827  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:13.754840  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.754702  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.754893  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.756310  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.756707  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.758906  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.758937  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.758961  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.760001  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.760051  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.760288  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.763360  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.763607  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.770081  127900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:10.003107  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 23:17:10.003162  128156 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:10.003218  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:12.288548  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.285296733s)
	I1212 23:17:12.288591  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 23:17:12.288623  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:12.288674  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:13.771543  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:13.771565  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:13.769624  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I1212 23:17:13.771589  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.772282  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.772841  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.772898  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.773284  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.773451  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.775327  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.775699  127900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:13.775713  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:13.775738  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.779093  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779539  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.779563  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779784  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.779957  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.780110  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.780255  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.787297  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.787663  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.787729  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.788010  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.789645  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.789826  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.790032  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.956110  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:13.956139  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:13.974813  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:14.024369  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:14.045961  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:14.045998  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:14.133161  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.133192  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:14.342486  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.827118  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146649731s)
	I1212 23:17:14.827249  127900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:14.827300  127900 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.118984074s)
	I1212 23:17:14.827324  127900 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:15.050916  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.076057269s)
	I1212 23:17:15.051030  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051049  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.051444  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.051497  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.051508  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.051517  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051527  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.053501  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.053573  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.053586  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.229413  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.229504  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.229934  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.231467  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.231489  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.522482  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.49806272s)
	I1212 23:17:15.522554  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.522574  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.522920  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.522971  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.522989  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.523009  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.523024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.523301  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.523322  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558083  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.21554598s)
	I1212 23:17:15.558173  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558200  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.558568  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.558591  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558603  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558613  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.559348  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.559370  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.559364  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.559387  127900 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-549640"
	I1212 23:17:15.562044  127900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 23:17:11.796385  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796896  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796930  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:11.796831  129129 retry.go:31] will retry after 2.569081306s: waiting for machine to come up
	I1212 23:17:14.369090  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:14.369522  129129 retry.go:31] will retry after 3.566691604s: waiting for machine to come up
	I1212 23:17:15.563724  127900 addons.go:502] enable addons completed in 1.882971652s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 23:17:17.065214  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:15.574585  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.285870336s)
	I1212 23:17:15.574622  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 23:17:15.574667  128156 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:15.574736  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:17.937618  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938021  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938052  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:17.937984  129129 retry.go:31] will retry after 2.790781234s: waiting for machine to come up
	I1212 23:17:20.730659  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731151  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731179  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:20.731128  129129 retry.go:31] will retry after 5.345575973s: waiting for machine to come up
	I1212 23:17:19.564344  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:21.564330  127900 node_ready.go:49] node "old-k8s-version-549640" has status "Ready":"True"
	I1212 23:17:21.564356  127900 node_ready.go:38] duration metric: took 6.737022414s waiting for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:21.564367  127900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:21.569573  127900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:19.606668  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.031891087s)
	I1212 23:17:19.606701  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 23:17:19.606731  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:19.606791  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:21.765860  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.159035751s)
	I1212 23:17:21.765896  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 23:17:21.765934  128156 cache_images.go:123] Successfully loaded all cached images
	I1212 23:17:21.765944  128156 cache_images.go:92] LoadImages completed in 18.221602939s
	I1212 23:17:21.766033  128156 ssh_runner.go:195] Run: crio config
	I1212 23:17:21.818966  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:21.818996  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:21.819021  128156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:21.819048  128156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-115023 NodeName:no-preload-115023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:21.819220  128156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-115023"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:21.819310  128156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-115023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:21.819369  128156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:17:21.829605  128156 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:21.829690  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:21.838518  128156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 23:17:21.854214  128156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:17:21.869927  128156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1212 23:17:21.886723  128156 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:21.890481  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:21.902964  128156 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023 for IP: 192.168.72.32
	I1212 23:17:21.902993  128156 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:21.903156  128156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:21.903194  128156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:21.903275  128156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.key
	I1212 23:17:21.903357  128156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key.9d394d40
	I1212 23:17:21.903393  128156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key
	I1212 23:17:21.903509  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:21.903540  128156 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:21.903550  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:21.903583  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:21.903623  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:21.903647  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:21.903687  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:21.904310  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:21.928095  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:17:21.951412  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:21.974936  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:21.997877  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:22.020598  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:22.042859  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:22.065941  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:22.088688  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:22.110493  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:22.132736  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:22.154394  128156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:22.170427  128156 ssh_runner.go:195] Run: openssl version
	I1212 23:17:22.176106  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:22.186617  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191355  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191423  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.196989  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:22.208456  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:22.219355  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224154  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224224  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.230069  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:22.240929  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:22.251836  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256441  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256496  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.261952  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:22.272452  128156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:22.277105  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:22.283114  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:22.288860  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:22.294416  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:22.300148  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:22.306380  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:17:22.316419  128156 kubeadm.go:404] StartCluster: {Name:no-preload-115023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:22.316550  128156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:22.316623  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:22.358616  128156 cri.go:89] found id: ""
	I1212 23:17:22.358703  128156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:22.368800  128156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:22.368823  128156 kubeadm.go:636] restartCluster start
	I1212 23:17:22.368883  128156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:22.378570  128156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.380161  128156 kubeconfig.go:92] found "no-preload-115023" server: "https://192.168.72.32:8443"
	I1212 23:17:22.383451  128156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:22.392995  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.393064  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.405318  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.405337  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.405370  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.416721  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.917468  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.917571  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.929995  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.417616  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.417752  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.430907  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.917522  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.917607  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.929655  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:24.417316  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.417427  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.429590  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
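The repeated "Checking apiserver status" lines are a polling loop: roughly every 500ms the runner asks pgrep for a kube-apiserver process and keeps trying until one appears or a deadline passes. A rough Go sketch of that pattern follows; the 30-second deadline is an assumption for illustration, not minikube's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up
// or the deadline expires, mirroring the ~500ms cadence in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // newest matching PID, newline-terminated
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no kube-apiserver process appeared within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	fmt.Println(pid, err)
}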
	I1212 23:17:27.436348  127760 start.go:369] acquired machines lock for "embed-certs-809120" in 1m2.018372087s
	I1212 23:17:27.436407  127760 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:27.436418  127760 fix.go:54] fixHost starting: 
	I1212 23:17:27.436818  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:27.436856  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:27.453079  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1212 23:17:27.453449  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:27.453967  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:17:27.453999  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:27.454365  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:27.454580  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:27.454743  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:17:27.456367  127760 fix.go:102] recreateIfNeeded on embed-certs-809120: state=Stopped err=<nil>
	I1212 23:17:27.456395  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	W1212 23:17:27.456549  127760 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:27.458402  127760 out.go:177] * Restarting existing kvm2 VM for "embed-certs-809120" ...
	I1212 23:17:23.588762  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:26.087305  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:27.459818  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Start
	I1212 23:17:27.459994  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring networks are active...
	I1212 23:17:27.460587  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network default is active
	I1212 23:17:27.460997  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network mk-embed-certs-809120 is active
	I1212 23:17:27.461361  127760 main.go:141] libmachine: (embed-certs-809120) Getting domain xml...
	I1212 23:17:27.462026  127760 main.go:141] libmachine: (embed-certs-809120) Creating domain...
	I1212 23:17:26.081099  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Found IP for machine: 192.168.39.180
	I1212 23:17:26.081626  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has current primary IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081637  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserving static IP address...
	I1212 23:17:26.082029  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserved static IP address: 192.168.39.180
	I1212 23:17:26.082080  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.082119  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for SSH to be available...
	I1212 23:17:26.082157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | skip adding static IP to network mk-default-k8s-diff-port-850839 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"}
	I1212 23:17:26.082182  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Getting to WaitForSSH function...
	I1212 23:17:26.084444  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.084803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084864  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH client type: external
	I1212 23:17:26.084925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa (-rw-------)
	I1212 23:17:26.084971  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:26.084992  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | About to run SSH command:
	I1212 23:17:26.085006  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | exit 0
	I1212 23:17:26.175122  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:26.175455  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetConfigRaw
	I1212 23:17:26.176092  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.178747  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179016  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.179044  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179388  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:17:26.179602  128282 machine.go:88] provisioning docker machine ...
	I1212 23:17:26.179624  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:26.179853  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180033  128282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850839"
	I1212 23:17:26.180051  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180209  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.182470  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.182812  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.182848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.183003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.183193  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183374  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183538  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.183709  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.184100  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.184115  128282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850839 && echo "default-k8s-diff-port-850839" | sudo tee /etc/hostname
	I1212 23:17:26.313520  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850839
	
	I1212 23:17:26.313562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.316848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.317633  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317817  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.318047  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318229  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318344  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.318567  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.318888  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.318907  128282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850839/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:26.443174  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:26.443206  128282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:26.443224  128282 buildroot.go:174] setting up certificates
	I1212 23:17:26.443255  128282 provision.go:83] configureAuth start
	I1212 23:17:26.443273  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.443628  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.446155  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446467  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.446501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446568  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.449661  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450005  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.450041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450170  128282 provision.go:138] copyHostCerts
	I1212 23:17:26.450235  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:26.450258  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:26.450330  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:26.450442  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:26.450453  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:26.450483  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:26.450555  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:26.450565  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:26.450592  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:26.450656  128282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850839 san=[192.168.39.180 192.168.39.180 localhost 127.0.0.1 minikube default-k8s-diff-port-850839]
	I1212 23:17:26.688969  128282 provision.go:172] copyRemoteCerts
	I1212 23:17:26.689035  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:26.689060  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.691731  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692004  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.692033  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692207  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.692441  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.692607  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.692736  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:26.781407  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:26.804712  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:17:26.827036  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:26.848977  128282 provision.go:86] duration metric: configureAuth took 405.706324ms
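The configureAuth step above generates a server certificate whose SANs have to cover the machine IP, localhost, minikube, and the profile name (the san=[...] list logged by provision.go), then copies ca.pem, server.pem, and server-key.pem into /etc/docker on the guest. A purely illustrative Go sketch that prints the SANs of such a PEM certificate so they can be compared with that list; the path is the host-side location shown in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Host-side location of the generated server certificate, from the log above.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// These should include the entries from the san=[...] list in the log.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}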
	I1212 23:17:26.849006  128282 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:26.849214  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:26.849310  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.851925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852281  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.852314  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852486  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.852679  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.852860  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.853003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.853172  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.853688  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.853711  128282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:27.183932  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:27.183961  128282 machine.go:91] provisioned docker machine in 1.004345653s
	I1212 23:17:27.183972  128282 start.go:300] post-start starting for "default-k8s-diff-port-850839" (driver="kvm2")
	I1212 23:17:27.183982  128282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:27.183999  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.184348  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:27.184398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.187375  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187709  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.187759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187858  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.188054  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.188248  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.188400  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.277858  128282 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:27.282128  128282 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:27.282157  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:27.282244  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:27.282368  128282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:27.282481  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:27.291755  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:27.313541  128282 start.go:303] post-start completed in 129.554425ms
	I1212 23:17:27.313563  128282 fix.go:56] fixHost completed within 25.388839079s
	I1212 23:17:27.313586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.316388  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316737  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.316760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316934  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.317141  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317343  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317540  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.317789  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:27.318143  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:27.318158  128282 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:27.436207  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423047.383892438
	
	I1212 23:17:27.436230  128282 fix.go:206] guest clock: 1702423047.383892438
	I1212 23:17:27.436237  128282 fix.go:219] Guest: 2023-12-12 23:17:27.383892438 +0000 UTC Remote: 2023-12-12 23:17:27.313567546 +0000 UTC m=+296.357388926 (delta=70.324892ms)
	I1212 23:17:27.436261  128282 fix.go:190] guest clock delta is within tolerance: 70.324892ms
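The guest-clock check above reads the VM's clock over SSH, compares it with the host-side timestamp, and accepts the 70.324892ms delta because it sits inside the allowed tolerance. A toy Go version of that comparison using the two timestamps from the log; the one-second tolerance here is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by the VM (seconds.nanoseconds from the log).
	guest := time.Unix(1702423047, 383892438)
	// Host-side reference timestamp from the same log line.
	remote := time.Date(2023, 12, 12, 23, 17, 27, 313567546, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}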
	I1212 23:17:27.436266  128282 start.go:83] releasing machines lock for "default-k8s-diff-port-850839", held for 25.511577503s
	I1212 23:17:27.436289  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.436571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:27.439315  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439697  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.439730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440396  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440660  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440741  128282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:27.440793  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.440873  128282 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:27.440891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.443558  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443880  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443938  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.443965  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444132  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444338  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444369  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.444398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444741  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.444788  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444907  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.445073  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.528730  128282 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:27.563590  128282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:27.715220  128282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:27.722775  128282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:27.722883  128282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:27.743217  128282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:27.743264  128282 start.go:475] detecting cgroup driver to use...
	I1212 23:17:27.743344  128282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:27.759125  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:27.772532  128282 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:27.772602  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:27.786439  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:27.800413  128282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:27.905626  128282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:28.037279  128282 docker.go:219] disabling docker service ...
	I1212 23:17:28.037362  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:28.050670  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:28.063551  128282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:28.195512  128282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:28.306881  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:28.324506  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:28.344908  128282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:28.344992  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.354788  128282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:28.354883  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.364157  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.373415  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.383391  128282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:28.393203  128282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:28.401935  128282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:28.402006  128282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:28.413618  128282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:28.426007  128282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:28.536725  128282 ssh_runner.go:195] Run: sudo systemctl restart crio
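The sed commands above rewrite two settings in CRI-O's 02-crio.conf drop-in, pointing it at the pause:3.9 image and forcing the cgroupfs cgroup manager, before crio is restarted. A rough local equivalent in Go, assuming direct file access instead of minikube's SSH runner:

package main

import (
	"os"
	"regexp"
)

func main() {
	// CRI-O drop-in edited by the sed commands in the log above.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Point CRI-O at the pause:3.9 image.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Force the cgroupfs cgroup manager.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}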
	I1212 23:17:28.711815  128282 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:28.711892  128282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:28.717242  128282 start.go:543] Will wait 60s for crictl version
	I1212 23:17:28.717306  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:17:28.724383  128282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:28.779687  128282 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:28.779781  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.834147  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.894131  128282 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:24.917347  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.917438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.928690  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.417259  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.417343  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.428544  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.917136  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.917212  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.927813  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.417826  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.417917  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.428147  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.917724  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.917803  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.929515  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.416997  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.417102  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.428180  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.917712  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.917830  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.931264  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.417370  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.417479  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.432478  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.916907  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.917039  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.932698  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:29.416883  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.416989  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.434138  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.895767  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:28.898899  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899233  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:28.899276  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899500  128282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:28.903950  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:28.917270  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:28.917383  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:28.956752  128282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:28.956832  128282 ssh_runner.go:195] Run: which lz4
	I1212 23:17:28.961387  128282 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:28.965850  128282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:28.965925  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:30.869493  128282 crio.go:444] Took 1.908152 seconds to copy over tarball
	I1212 23:17:30.869580  128282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
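Since no images are preloaded yet, the runner stats /preloaded.tar.lz4, copies the ~458 MB preload tarball over SSH because it is missing, and unpacks it with lz4 into /var. A simplified Go sketch of the extract step, assuming tar and lz4 are installed and using the guest-side path from the log:

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // guest-side path from the log
	if _, err := os.Stat(tarball); err != nil {
		// In the log this is where the ~458 MB tarball gets scp'd from the host cache.
		panic("preload tarball missing: " + err.Error())
	}
	// Same extraction as `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` above.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}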
	I1212 23:17:28.610279  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:31.088625  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:28.873664  127760 main.go:141] libmachine: (embed-certs-809120) Waiting to get IP...
	I1212 23:17:28.874489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:28.874895  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:28.874992  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:28.874848  129329 retry.go:31] will retry after 244.313261ms: waiting for machine to come up
	I1212 23:17:29.120442  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.120959  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.120997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.120852  129329 retry.go:31] will retry after 369.234988ms: waiting for machine to come up
	I1212 23:17:29.491516  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.492081  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.492124  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.492035  129329 retry.go:31] will retry after 448.746179ms: waiting for machine to come up
	I1212 23:17:29.942643  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.943286  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.943319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.943229  129329 retry.go:31] will retry after 520.98965ms: waiting for machine to come up
	I1212 23:17:30.465955  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:30.466468  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:30.466503  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:30.466430  129329 retry.go:31] will retry after 617.123622ms: waiting for machine to come up
	I1212 23:17:31.085159  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.085706  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.085746  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.085665  129329 retry.go:31] will retry after 853.539861ms: waiting for machine to come up
	I1212 23:17:31.940795  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.941240  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.941265  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.941169  129329 retry.go:31] will retry after 960.346145ms: waiting for machine to come up
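While the embed-certs-809120 VM boots, libmachine polls for its DHCP lease and waits a little longer after each failed attempt (244ms, 369ms, 448ms, and so on). A generic Go sketch of that retry shape; the delays and the lookup stub are illustrative and not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries lookup with increasing delays, roughly the shape of the
// "will retry after ..." lines in the log above.
func waitForIP(lookup func() (string, error), delays []time.Duration) (string, error) {
	if ip, err := lookup(); err == nil {
		return ip, nil
	}
	for _, d := range delays {
		time.Sleep(d)
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
	}
	return "", errors.New("machine did not report an IP in time")
}

func main() {
	delays := []time.Duration{
		244 * time.Millisecond, 369 * time.Millisecond,
		448 * time.Millisecond, 520 * time.Millisecond,
	}
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, delays)
	fmt.Println(ip, err)
}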
	I1212 23:17:29.916897  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.917007  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.932055  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.417555  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.417657  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.433218  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.917841  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.917967  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.933255  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.417271  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.417357  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.429192  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.917804  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.917908  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.930333  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:32.393106  128156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:32.393209  128156 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:32.393228  128156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:32.393315  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:32.445688  128156 cri.go:89] found id: ""
	I1212 23:17:32.445774  128156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:32.462269  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:32.473687  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:32.473768  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483043  128156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483075  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:32.656758  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.442637  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.666131  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.751061  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.855861  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:33.855952  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:33.879438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.403317  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.178083  128282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.308463726s)
	I1212 23:17:34.178124  128282 crio.go:451] Took 3.308601 seconds to extract the tarball
	I1212 23:17:34.178136  128282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:34.219740  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:34.268961  128282 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:34.268987  128282 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:34.269051  128282 ssh_runner.go:195] Run: crio config
	I1212 23:17:34.326979  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:34.327007  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:34.327033  128282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:34.327060  128282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850839 NodeName:default-k8s-diff-port-850839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:34.327252  128282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:34.327353  128282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 23:17:34.327425  128282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:34.338300  128282 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:34.338385  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:34.347329  128282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 23:17:34.364120  128282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:34.380374  128282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 23:17:34.398219  128282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:34.402134  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:34.415197  128282 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839 for IP: 192.168.39.180
	I1212 23:17:34.415252  128282 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:34.415436  128282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:34.415472  128282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:34.415540  128282 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.key
	I1212 23:17:34.415593  128282 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key.66237cde
	I1212 23:17:34.415626  128282 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key
	I1212 23:17:34.415739  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:34.415780  128282 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:34.415793  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:34.415841  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:34.415886  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:34.415931  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:34.415990  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:34.416632  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:34.440783  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:34.466303  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:34.491267  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:17:34.516659  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:34.542472  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:34.569367  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:34.599627  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:34.628781  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:34.655361  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:34.681199  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:34.706068  128282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:34.724142  128282 ssh_runner.go:195] Run: openssl version
	I1212 23:17:34.730108  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:34.740221  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745118  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745203  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.751091  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:34.761120  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:34.771456  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776480  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776559  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.782833  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:34.793597  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:34.804519  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809767  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809831  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.815838  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:34.825967  128282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:34.831487  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:34.838280  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:34.845663  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:34.854810  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:34.862962  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:34.869641  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
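The `openssl x509 -noout -checkend 86400` probes above ask whether each existing certificate will still be valid 24 hours from now; if any check fails, the certificate has to be regenerated before reuse. A minimal Go sketch of the same check follows; the function name, the hard-coded example path, and the single-certificate assumption are illustrative, not minikube's own code:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor24h reports whether the first certificate in the PEM file at path
// is still valid 24 hours from now, mirroring the intent of the
// `openssl x509 -checkend 86400` probes in the log above.
func validFor24h(path string) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
    // Example path taken from the log above; any PEM-encoded certificate works.
    ok, err := validFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    fmt.Println(ok, err)
}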
	I1212 23:17:34.876373  128282 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:34.876509  128282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:34.876579  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:34.918413  128282 cri.go:89] found id: ""
	I1212 23:17:34.918486  128282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:34.928267  128282 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:34.928305  128282 kubeadm.go:636] restartCluster start
	I1212 23:17:34.928396  128282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:34.938202  128282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.939397  128282 kubeconfig.go:92] found "default-k8s-diff-port-850839" server: "https://192.168.39.180:8444"
	I1212 23:17:34.941945  128282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:34.953458  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.953552  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.965537  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.965561  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.965623  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.977454  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.478209  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.478292  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.505825  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.978537  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.978615  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.991422  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:33.591861  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:35.629760  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:32.902889  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:32.903556  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:32.903588  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:32.903500  129329 retry.go:31] will retry after 1.225619987s: waiting for machine to come up
	I1212 23:17:34.130560  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:34.131066  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:34.131098  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:34.131009  129329 retry.go:31] will retry after 1.544530633s: waiting for machine to come up
	I1212 23:17:35.677455  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:35.677916  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:35.677939  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:35.677902  129329 retry.go:31] will retry after 1.740004665s: waiting for machine to come up
	I1212 23:17:37.419743  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:37.420167  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:37.420203  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:37.420121  129329 retry.go:31] will retry after 2.220250897s: waiting for machine to come up
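The `retry.go:31` messages above record the wait-for-machine loop backing off with progressively longer, slightly randomized delays between attempts to read the domain's IP address. A rough Go sketch of that retry-with-jittered-backoff pattern follows; the attempt count, growth factor, and jitter fraction are illustrative assumptions, not the values libmachine or minikube actually use:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryWithBackoff calls op until it succeeds or attempts run out, sleeping a
// progressively longer, jittered delay between tries, similar in spirit to the
// "will retry after ..." messages in the log above.
func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
    delay := base
    var err error
    for i := 0; i < attempts; i++ {
        if err = op(); err == nil {
            return nil
        }
        // Add up to ~25% random jitter so concurrent waiters do not retry in lockstep.
        jitter := time.Duration(rand.Int63n(int64(delay)/4 + 1))
        time.Sleep(delay + jitter)
        delay = delay * 3 / 2 // grow the delay roughly 1.5x per attempt
    }
    return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
    err := retryWithBackoff(func() error {
        return errors.New("machine not up yet") // placeholder for the real readiness check
    }, 5, time.Second)
    fmt.Println(err)
}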
	I1212 23:17:34.902923  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.402835  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.903269  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.403728  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.903298  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.403775  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.903663  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.403892  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.429370  128156 api_server.go:72] duration metric: took 4.573508338s to wait for apiserver process to appear ...
	I1212 23:17:38.429402  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:38.429424  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.429952  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.430019  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.430455  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.931234  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:36.478240  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.478317  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.494437  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:36.978574  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.978654  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.995711  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.478404  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.478484  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.492356  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.977979  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.978123  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.993637  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.478102  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.478227  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.494347  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.977645  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.977771  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.994288  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.477795  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.477942  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.495986  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.978587  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.978695  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.994551  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.477958  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.478056  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.492956  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.978560  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.978663  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.994199  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.089524  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:40.591793  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:39.643094  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:39.643562  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:39.643603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:39.643508  129329 retry.go:31] will retry after 2.987735855s: waiting for machine to come up
	I1212 23:17:42.633477  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:42.633958  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:42.633993  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:42.633907  129329 retry.go:31] will retry after 3.131576961s: waiting for machine to come up
	I1212 23:17:41.334632  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:41.334685  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:41.334703  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.392719  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.392768  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.431413  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.445393  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.445428  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.930605  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.935880  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.935918  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.430551  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.435690  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:42.435720  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.931341  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.936295  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:17:42.944125  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:17:42.944163  128156 api_server.go:131] duration metric: took 4.514753942s to wait for apiserver health ...
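The preceding block is the tail of the apiserver healthz wait: the anonymous probe first gets 403 before the RBAC bootstrap roles exist, then 500 while post-start hooks are still failing, and finally 200. A minimal Go sketch of that kind of polling follows; the URL, timeout, poll interval, and the choice to skip TLS verification are illustrative assumptions rather than minikube's actual client configuration:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// pollHealthz repeatedly GETs the apiserver /healthz endpoint until it returns
// 200 OK or the deadline passes, echoing the wait loop recorded in the log above.
func pollHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // The probe in the log runs anonymously against a self-signed apiserver
        // certificate, so verification is skipped in this sketch as well.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
    // Endpoint taken from the log above; adjust for the cluster under test.
    if err := pollHealthz("https://192.168.72.32:8443/healthz", 2*time.Minute); err != nil {
        fmt.Println(err)
    }
}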
	I1212 23:17:42.944173  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:42.944179  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:42.945951  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:42.947258  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:42.957745  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:42.978269  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:42.990231  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:42.990267  128156 system_pods.go:61] "coredns-76f75df574-2rdhr" [266c2440-a927-476c-b918-d0712834fc2f] Running
	I1212 23:17:42.990274  128156 system_pods.go:61] "etcd-no-preload-115023" [522ee237-12e0-4b83-9e20-05713cd87c7d] Running
	I1212 23:17:42.990281  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [9048886a-1b8b-407d-bd71-c5a850d88a5f] Running
	I1212 23:17:42.990287  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [4652e03f-2622-41d8-8791-bcc648d43432] Running
	I1212 23:17:42.990292  128156 system_pods.go:61] "kube-proxy-rqhmc" [b7514603-3389-4a38-b24a-e9c7948364bc] Running
	I1212 23:17:42.990299  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [7ce16391-9627-454b-b0de-27af47921997] Running
	I1212 23:17:42.990308  128156 system_pods.go:61] "metrics-server-57f55c9bc5-b42rv" [f27bd873-340b-4ae1-922a-ed8f52d558dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:42.990316  128156 system_pods.go:61] "storage-provisioner" [d9565f7f-dcf4-4e4d-88fd-e8a54bbf0e40] Running
	I1212 23:17:42.990327  128156 system_pods.go:74] duration metric: took 12.031472ms to wait for pod list to return data ...
	I1212 23:17:42.990347  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:42.994787  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:42.994817  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:42.994827  128156 node_conditions.go:105] duration metric: took 4.471497ms to run NodePressure ...
	I1212 23:17:42.994844  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.281299  128156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:43.286299  128156 retry.go:31] will retry after 184.15509ms: kubelet not initialised
	I1212 23:17:43.476354  128156 retry.go:31] will retry after 533.806598ms: kubelet not initialised
	I1212 23:17:44.036349  128156 retry.go:31] will retry after 483.473669ms: kubelet not initialised
	I1212 23:17:41.477798  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.477898  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.493963  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:41.977991  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.978077  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.994590  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.478242  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.478334  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.495374  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.978495  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.978597  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.992337  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.477604  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.477667  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.491061  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.977638  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.977754  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.991654  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.478308  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:44.478409  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:44.494965  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.953708  128282 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:44.953763  128282 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:44.953780  128282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:44.953874  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:45.003440  128282 cri.go:89] found id: ""
	I1212 23:17:45.003519  128282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:45.021471  128282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:45.036134  128282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:45.036203  128282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049188  128282 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049214  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.197549  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.958707  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.088583  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.587947  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:47.588918  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.768814  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:45.769238  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:45.769270  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:45.769171  129329 retry.go:31] will retry after 3.722952815s: waiting for machine to come up
	I1212 23:17:44.529285  128156 kubeadm.go:787] kubelet initialised
	I1212 23:17:44.529310  128156 kubeadm.go:788] duration metric: took 1.247981757s waiting for restarted kubelet to initialise ...
	I1212 23:17:44.529321  128156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:44.551751  128156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:46.588427  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:48.589582  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:46.161702  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.251040  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.344286  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:46.344385  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.359646  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.875339  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.375793  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.875532  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.375394  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.875412  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.903144  128282 api_server.go:72] duration metric: took 2.558861066s to wait for apiserver process to appear ...
	I1212 23:17:48.903170  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:48.903188  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.903660  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:48.903697  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.904122  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:49.404880  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:50.088813  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.089208  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:49.494927  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495446  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has current primary IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495474  127760 main.go:141] libmachine: (embed-certs-809120) Found IP for machine: 192.168.50.221
	I1212 23:17:49.495489  127760 main.go:141] libmachine: (embed-certs-809120) Reserving static IP address...
	I1212 23:17:49.495884  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.495933  127760 main.go:141] libmachine: (embed-certs-809120) DBG | skip adding static IP to network mk-embed-certs-809120 - found existing host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"}
	I1212 23:17:49.495954  127760 main.go:141] libmachine: (embed-certs-809120) Reserved static IP address: 192.168.50.221
	I1212 23:17:49.495971  127760 main.go:141] libmachine: (embed-certs-809120) Waiting for SSH to be available...
	I1212 23:17:49.495987  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Getting to WaitForSSH function...
	I1212 23:17:49.498007  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498362  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.498398  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498514  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH client type: external
	I1212 23:17:49.498545  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa (-rw-------)
	I1212 23:17:49.498583  127760 main.go:141] libmachine: (embed-certs-809120) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:49.498598  127760 main.go:141] libmachine: (embed-certs-809120) DBG | About to run SSH command:
	I1212 23:17:49.498615  127760 main.go:141] libmachine: (embed-certs-809120) DBG | exit 0
	I1212 23:17:49.635655  127760 main.go:141] libmachine: (embed-certs-809120) DBG | SSH cmd err, output: <nil>: 
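	For reference, the external SSH probe logged above can be reassembled from the argument list on the "Using SSH client type: external" lines into a single command (options reordered into conventional ssh form; this is a reconstruction for readability, not an additional command that was executed):

	    ssh -F /dev/null \
	        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa \
	        -p 22 docker@192.168.50.221 "exit 0"

	The empty error and output in the "SSH cmd err, output: <nil>:" line indicate the probe succeeded, so provisioning continues.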
	I1212 23:17:49.636039  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetConfigRaw
	I1212 23:17:49.636795  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.639601  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640032  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.640059  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640367  127760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:17:49.640604  127760 machine.go:88] provisioning docker machine ...
	I1212 23:17:49.640629  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:49.640901  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641044  127760 buildroot.go:166] provisioning hostname "embed-certs-809120"
	I1212 23:17:49.641066  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641184  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.643599  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644050  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.644082  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644210  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.644439  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644612  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644791  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.644961  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.645333  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.645350  127760 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-809120 && echo "embed-certs-809120" | sudo tee /etc/hostname
	I1212 23:17:49.779263  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-809120
	
	I1212 23:17:49.779298  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.782329  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782739  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.782772  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782891  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.783133  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783306  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783466  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.783641  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.784029  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.784055  127760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-809120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-809120/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-809120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:49.914603  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:49.914641  127760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:49.914673  127760 buildroot.go:174] setting up certificates
	I1212 23:17:49.914686  127760 provision.go:83] configureAuth start
	I1212 23:17:49.914704  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.915021  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.918281  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918661  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.918715  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918849  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.921184  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921566  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.921603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921732  127760 provision.go:138] copyHostCerts
	I1212 23:17:49.921811  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:49.921824  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:49.921891  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:49.922013  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:49.922030  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:49.922061  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:49.922139  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:49.922149  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:49.922174  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:49.922255  127760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-809120 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube embed-certs-809120]
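	The server certificate generated here carries the SANs listed above (192.168.50.221, localhost, 127.0.0.1, minikube, embed-certs-809120). If one wanted to confirm them by hand after the run, a standard openssl inspection of the path shown would do it (not part of the captured test run):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'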
	I1212 23:17:50.309293  127760 provision.go:172] copyRemoteCerts
	I1212 23:17:50.309361  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:50.309389  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.312319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.312745  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312942  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.313157  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.313362  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.313554  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.401075  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:50.426930  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:17:50.452785  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:50.480062  127760 provision.go:86] duration metric: configureAuth took 565.356144ms
	I1212 23:17:50.480098  127760 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:50.480377  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:50.480523  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.483621  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484035  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.484091  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484244  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.484455  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484603  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484728  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.484903  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.485264  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.485282  127760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:50.842779  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:50.842815  127760 machine.go:91] provisioned docker machine in 1.202192917s
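	The "%!s(MISSING)" in the logged command appears to be the logger treating the literal %s inside the remote printf as a format verb with no argument; the actual payload is the CRIO_MINIKUBE_OPTIONS line echoed back by tee above. The end state can be checked by hand with something like (illustrative, not part of the captured run):

	    minikube -p embed-certs-809120 ssh -- cat /etc/sysconfig/crio.minikube
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '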
	I1212 23:17:50.842831  127760 start.go:300] post-start starting for "embed-certs-809120" (driver="kvm2")
	I1212 23:17:50.842846  127760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:50.842882  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:50.843282  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:50.843318  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.846267  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846670  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.846704  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846881  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.847102  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.847322  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.847496  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.934904  127760 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:50.939875  127760 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:50.939912  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:50.940000  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:50.940130  127760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:50.940242  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:50.950764  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:50.977204  127760 start.go:303] post-start completed in 134.34972ms
	I1212 23:17:50.977232  127760 fix.go:56] fixHost completed within 23.540815255s
	I1212 23:17:50.977256  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.980553  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981029  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.981065  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981350  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.981611  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981766  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981917  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.982111  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.982448  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.982467  127760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:51.096273  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423071.035304579
	
	I1212 23:17:51.096303  127760 fix.go:206] guest clock: 1702423071.035304579
	I1212 23:17:51.096311  127760 fix.go:219] Guest: 2023-12-12 23:17:51.035304579 +0000 UTC Remote: 2023-12-12 23:17:50.977236465 +0000 UTC m=+368.149225502 (delta=58.068114ms)
	I1212 23:17:51.096365  127760 fix.go:190] guest clock delta is within tolerance: 58.068114ms
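	The delta reported here checks out: 1702423071.035304579 (guest clock) − 1702423070.977236465 (the Remote timestamp for the same moment) = 0.058068114 s, i.e. the 58.068114ms shown, which is why no clock resync is attempted before the machines lock is released.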
	I1212 23:17:51.096375  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 23.659994787s
	I1212 23:17:51.096401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.096676  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:51.099275  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099683  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.099714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099864  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100586  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100671  127760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:51.100713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.100833  127760 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:51.100859  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.103808  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104103  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104214  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104268  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104379  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104415  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104405  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104615  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104620  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104817  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.104999  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.105058  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.105220  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.214734  127760 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:51.221556  127760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:51.379699  127760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:51.386319  127760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:51.386411  127760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:51.406594  127760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:51.406623  127760 start.go:475] detecting cgroup driver to use...
	I1212 23:17:51.406707  127760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:51.421646  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:51.439574  127760 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:51.439651  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:51.456389  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:51.469380  127760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:51.576093  127760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:51.711468  127760 docker.go:219] disabling docker service ...
	I1212 23:17:51.711548  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:51.726747  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:51.739661  127760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:51.852974  127760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:51.973603  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:51.986983  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:52.004739  127760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:52.004809  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.017255  127760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:52.017345  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.029275  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.040398  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
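	Taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings (reconstructed from the sed expressions; the resulting file itself is not dumped in this log):

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"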
	I1212 23:17:52.051671  127760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:52.062036  127760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:52.070879  127760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:52.070958  127760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:52.087878  127760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
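	This is the usual fallback when the bridge netfilter sysctl is absent: loading br_netfilter creates /proc/sys/net/bridge/bridge-nf-call-iptables, and IPv4 forwarding is then switched on explicitly. A quick manual check of the end state would look like this (illustrative, not part of the run):

	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables
	    cat /proc/sys/net/ipv4/ip_forward   # expected: 1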
	I1212 23:17:52.099487  127760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:52.246621  127760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:52.445182  127760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:52.445259  127760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:52.450378  127760 start.go:543] Will wait 60s for crictl version
	I1212 23:17:52.450458  127760 ssh_runner.go:195] Run: which crictl
	I1212 23:17:52.454778  127760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:52.497569  127760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:52.497679  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.562042  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.622289  127760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:52.623892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:52.626997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627438  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:52.627474  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627731  127760 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:52.633387  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:52.647682  127760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:52.647763  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:52.691061  127760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:52.691138  127760 ssh_runner.go:195] Run: which lz4
	I1212 23:17:52.695575  127760 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:52.701228  127760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:52.701265  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:53.042479  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.042516  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.042532  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.134475  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.134511  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.404943  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.413791  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.413829  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:53.904341  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.916515  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.916564  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:54.404229  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:54.414091  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
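	The 403 → 500 → 200 progression above is the apiserver coming up: unauthenticated requests to /healthz are rejected while the default RBAC is still being bootstrapped (the same rbac/bootstrap-roles hook flagged as failed in the 500 bodies), then the verbose per-check output shows the remaining hooks, and finally a plain "ok" is returned. The same probe could be issued by hand (illustrative only, not part of the captured run):

	    curl -sk https://192.168.39.180:8444/healthz
	    curl -sk "https://192.168.39.180:8444/healthz?verbose"   # per-check [+]/[-] breakdown as in the blocks above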
	I1212 23:17:54.428577  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:17:54.428615  128282 api_server.go:131] duration metric: took 5.525437271s to wait for apiserver health ...
	I1212 23:17:54.428628  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:54.428638  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:54.430838  128282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:50.589742  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.593395  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:54.432405  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:54.450147  128282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:54.496866  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:54.519276  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:54.519327  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:17:54.519339  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:17:54.519354  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:17:54.519405  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:17:54.519418  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:17:54.519434  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:17:54.519447  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:54.519484  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:17:54.519498  128282 system_pods.go:74] duration metric: took 22.603103ms to wait for pod list to return data ...
	I1212 23:17:54.519512  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:54.526046  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:54.526083  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:54.526098  128282 node_conditions.go:105] duration metric: took 6.575834ms to run NodePressure ...
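	The capacities logged here (ephemeral-storage 17784752Ki, cpu 2) come from the node status and can be read back directly with kubectl if desired (illustrative, not part of the run):

	    kubectl --context default-k8s-diff-port-850839 get nodes \
	      -o jsonpath='{.items[0].status.capacity}'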
	I1212 23:17:54.526127  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:54.979886  128282 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991132  128282 kubeadm.go:787] kubelet initialised
	I1212 23:17:54.991169  128282 kubeadm.go:788] duration metric: took 11.248765ms waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991185  128282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:54.999550  128282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.008465  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008494  128282 pod_ready.go:81] duration metric: took 8.904589ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.008508  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008516  128282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.020120  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020152  128282 pod_ready.go:81] duration metric: took 11.625987ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.020164  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020191  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.030018  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030056  128282 pod_ready.go:81] duration metric: took 9.856172ms waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.030074  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030083  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.039957  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.039997  128282 pod_ready.go:81] duration metric: took 9.902972ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.040015  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.040025  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.384922  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384964  128282 pod_ready.go:81] duration metric: took 344.925878ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.384979  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384988  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.791268  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791307  128282 pod_ready.go:81] duration metric: took 406.306307ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.791323  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791335  128282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:56.186386  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186484  128282 pod_ready.go:81] duration metric: took 395.136012ms waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:56.186514  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186553  128282 pod_ready.go:38] duration metric: took 1.195355612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
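	This extra wait is the test's own polling loop over the system-critical pods (here it skips them because the node is not yet Ready); a rough kubectl equivalent of the same readiness gate would be (illustrative only, the suite does not shell out to kubectl for this):

	    kubectl --context default-k8s-diff-port-850839 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m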
	I1212 23:17:56.186577  128282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:56.201434  128282 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:56.201462  128282 kubeadm.go:640] restartCluster took 21.273148264s
	I1212 23:17:56.201473  128282 kubeadm.go:406] StartCluster complete in 21.325115034s
	I1212 23:17:56.201496  128282 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.201592  128282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:56.204683  128282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.205095  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:56.205222  128282 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
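	The toEnable map above drives the programmatic addon setup that follows; the equivalent operations from the CLI would be along these lines (not executed by the test, shown only for orientation):

	    minikube -p default-k8s-diff-port-850839 addons list
	    minikube -p default-k8s-diff-port-850839 addons enable metrics-server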
	I1212 23:17:56.205300  128282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205321  128282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205330  128282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205346  128282 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205361  128282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850839"
	W1212 23:17:56.205363  128282 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:56.205324  128282 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205445  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205360  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 23:17:56.205501  128282 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:56.205595  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205832  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205855  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205918  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205939  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205978  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.206077  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.215695  128282 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850839" context rescaled to 1 replicas
	I1212 23:17:56.215745  128282 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:56.219003  128282 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:56.221363  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.223684  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I1212 23:17:56.223901  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1212 23:17:56.224018  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I1212 23:17:56.224530  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.224610  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225015  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225250  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.225260  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.225597  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.225990  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.226015  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.226308  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.226318  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.227368  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.227535  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.229799  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.229817  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.230427  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.232575  128282 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-850839"
	W1212 23:17:56.232593  128282 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:56.232623  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.233075  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233110  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.233880  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233930  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.245636  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1212 23:17:56.246119  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.246606  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.246623  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.246950  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.247098  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.248959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.251159  128282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:56.249918  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1212 23:17:56.251294  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1212 23:17:56.252768  128282 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.252783  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:56.252798  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.253647  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.253753  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.254219  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254233  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254340  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254347  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254690  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254749  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.255310  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.255335  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.256017  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256612  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.256639  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.257003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.257189  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.257402  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.258242  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.260097  128282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:54.115994  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:55.606824  127900 pod_ready.go:92] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.606858  127900 pod_ready.go:81] duration metric: took 34.03725266s waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.606872  127900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619163  127900 pod_ready.go:92] pod "etcd-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.619197  127900 pod_ready.go:81] duration metric: took 12.316097ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619212  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627282  127900 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.627313  127900 pod_ready.go:81] duration metric: took 8.08913ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627328  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634928  127900 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.634962  127900 pod_ready.go:81] duration metric: took 7.625067ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634978  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644531  127900 pod_ready.go:92] pod "kube-proxy-b6lz6" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.644558  127900 pod_ready.go:81] duration metric: took 9.571853ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644572  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985318  127900 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.985350  127900 pod_ready.go:81] duration metric: took 340.769789ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985366  127900 pod_ready.go:38] duration metric: took 34.420989087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:55.985382  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:55.985443  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:56.008913  127900 api_server.go:72] duration metric: took 42.305439195s to wait for apiserver process to appear ...
	I1212 23:17:56.009000  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:56.009030  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:56.017005  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:56.018170  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:56.018198  127900 api_server.go:131] duration metric: took 9.18267ms to wait for apiserver health ...
	I1212 23:17:56.018209  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:56.189360  127900 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:56.189394  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.189401  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.189408  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.189415  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.189421  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.189428  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.189437  127900 system_pods.go:61] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.189447  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.189462  127900 system_pods.go:74] duration metric: took 171.24435ms to wait for pod list to return data ...
	I1212 23:17:56.189477  127900 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:17:56.386180  127900 default_sa.go:45] found service account: "default"
	I1212 23:17:56.386211  127900 default_sa.go:55] duration metric: took 196.72345ms for default service account to be created ...
	I1212 23:17:56.386223  127900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:17:56.591313  127900 system_pods.go:86] 8 kube-system pods found
	I1212 23:17:56.591345  127900 system_pods.go:89] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.591354  127900 system_pods.go:89] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.591361  127900 system_pods.go:89] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.591369  127900 system_pods.go:89] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.591375  127900 system_pods.go:89] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.591382  127900 system_pods.go:89] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.591393  127900 system_pods.go:89] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.591401  127900 system_pods.go:89] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.591414  127900 system_pods.go:126] duration metric: took 205.183283ms to wait for k8s-apps to be running ...
	I1212 23:17:56.591429  127900 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:17:56.591482  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.611938  127900 system_svc.go:56] duration metric: took 20.493956ms WaitForService to wait for kubelet.
	I1212 23:17:56.611982  127900 kubeadm.go:581] duration metric: took 42.908516938s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:17:56.612014  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:56.785799  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:56.785841  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:56.785856  127900 node_conditions.go:105] duration metric: took 173.834506ms to run NodePressure ...
	I1212 23:17:56.785874  127900 start.go:228] waiting for startup goroutines ...
	I1212 23:17:56.785883  127900 start.go:233] waiting for cluster config update ...
	I1212 23:17:56.785898  127900 start.go:242] writing updated cluster config ...
	I1212 23:17:56.786402  127900 ssh_runner.go:195] Run: rm -f paused
	I1212 23:17:56.860461  127900 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 23:17:56.862646  127900 out.go:177] 
	W1212 23:17:56.864213  127900 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 23:17:56.865656  127900 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 23:17:56.867482  127900 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-549640" cluster and "default" namespace by default
	I1212 23:17:54.719978  127760 crio.go:444] Took 2.024442 seconds to copy over tarball
	I1212 23:17:54.720063  127760 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:56.261553  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:56.261577  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:56.261599  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.269093  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269478  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.269501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269778  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.269969  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.270192  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.270348  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.273173  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1212 23:17:56.273551  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.274146  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.274170  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.274479  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.274657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.276224  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.276536  128282 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.276553  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:56.276572  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.279571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.279991  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.280030  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.280183  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.280395  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.280562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.280708  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.399444  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.447026  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:56.447058  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:56.453920  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.474280  128282 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:56.474316  128282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:17:56.509564  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:56.509598  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:56.575180  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:56.575217  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:56.641478  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:58.298873  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.89938362s)
	I1212 23:17:58.298942  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.298948  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.844991558s)
	I1212 23:17:58.298957  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.298986  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299063  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299326  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299356  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299367  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299387  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299439  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299448  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299463  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299479  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299489  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299673  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299690  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299850  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299879  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299899  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.308876  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.308905  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.309195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.309232  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.309241  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.418788  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.777244462s)
	I1212 23:17:58.418849  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.418866  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.419251  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.419285  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.419297  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.419308  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.420803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.420837  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.420857  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.420876  128282 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:58.591048  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:58.635345  128282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:17:54.595102  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:57.089235  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:58.815643  128282 addons.go:502] enable addons completed in 2.610454188s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:17:58.247448  127760 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.527350021s)
	I1212 23:17:58.247482  127760 crio.go:451] Took 3.527472 seconds to extract the tarball
	I1212 23:17:58.247500  127760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:58.292239  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:58.347669  127760 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:58.347700  127760 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:58.347774  127760 ssh_runner.go:195] Run: crio config
	I1212 23:17:58.410577  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:17:58.410604  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:58.410627  127760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:58.410658  127760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-809120 NodeName:embed-certs-809120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:58.410874  127760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-809120"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:58.410973  127760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-809120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
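Editor's note on the generated configuration above: minikube writes the multi-document kubeadm config to /var/tmp/minikube/kubeadm.yaml(.new), as the later scp and cp steps in this log show. The following is only a minimal sketch (not code from minikube itself) of how that file could be split and sanity-checked; the file path is taken from the log, the sigs.k8s.io/yaml dependency and everything else in the snippet are assumptions for illustration.

	// sketch: print the apiVersion/kind of each document in the generated
	// kubeadm config (expected: InitConfiguration, ClusterConfiguration,
	// KubeletConfiguration, KubeProxyConfiguration)
	package main

	import (
		"fmt"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `json:"apiVersion"`
				Kind       string `json:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				panic(err)
			}
			fmt.Println(meta.APIVersion, meta.Kind)
		}
	}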
	I1212 23:17:58.411040  127760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:58.422571  127760 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:58.422655  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:58.432833  127760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:17:58.449996  127760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:58.468807  127760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 23:17:58.487568  127760 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:58.492547  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:58.505497  127760 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120 for IP: 192.168.50.221
	I1212 23:17:58.505548  127760 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:58.505759  127760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:58.505820  127760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:58.505891  127760 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/client.key
	I1212 23:17:58.585996  127760 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key.edab0817
	I1212 23:17:58.586114  127760 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key
	I1212 23:17:58.586288  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:58.586319  127760 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:58.586330  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:58.586356  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:58.586381  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:58.586418  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:58.586483  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:58.587254  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:58.615215  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:58.644237  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:58.670345  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:58.694986  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:58.719944  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:58.744701  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:58.768614  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:58.792922  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:58.815723  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:58.840192  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:58.864277  127760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:58.883069  127760 ssh_runner.go:195] Run: openssl version
	I1212 23:17:58.889642  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:58.901893  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906910  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906964  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.912769  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:58.924171  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:58.937368  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942604  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942681  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.948759  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:58.959757  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:58.971091  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976035  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976105  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.982246  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:58.994786  127760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:58.999625  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:59.006233  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:59.012668  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:59.018959  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:59.025039  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:59.031628  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:17:59.037633  127760 kubeadm.go:404] StartCluster: {Name:embed-certs-809120 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:59.037779  127760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:59.037837  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:59.078977  127760 cri.go:89] found id: ""
	I1212 23:17:59.079065  127760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:59.090869  127760 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:59.090893  127760 kubeadm.go:636] restartCluster start
	I1212 23:17:59.090957  127760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:59.101950  127760 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.103088  127760 kubeconfig.go:92] found "embed-certs-809120" server: "https://192.168.50.221:8443"
	I1212 23:17:59.105562  127760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:59.115942  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.116006  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.128428  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.128452  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.128508  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.141075  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.641778  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.641854  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.654519  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.142171  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.142275  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.157160  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.641601  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.641719  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.654666  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.141184  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.141289  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.154899  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.641381  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.641501  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.654663  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.141186  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.141311  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.154140  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.642051  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.642157  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.655013  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.586733  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.588383  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:03.588956  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.092631  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:03.591508  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:04.090728  128282 node_ready.go:49] node "default-k8s-diff-port-850839" has status "Ready":"True"
	I1212 23:18:04.090757  128282 node_ready.go:38] duration metric: took 7.616412902s waiting for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:18:04.090775  128282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:04.099347  128282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107155  128282 pod_ready.go:92] pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.107180  128282 pod_ready.go:81] duration metric: took 7.807715ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107192  128282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113524  128282 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.113547  128282 pod_ready.go:81] duration metric: took 6.348789ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113557  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:03.141560  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.141654  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.156399  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:03.642066  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.642159  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.657347  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.141755  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.141837  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.158471  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.641645  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.641754  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.655061  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.141603  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.141699  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.154832  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.641246  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.641321  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.658753  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.141224  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.141299  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.156055  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.641506  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.641593  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.654083  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.141490  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.141570  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.154699  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.641257  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.641336  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.653935  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.590423  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.088212  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:06.134727  128282 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:07.136828  128282 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.136854  128282 pod_ready.go:81] duration metric: took 3.023290043s waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.136866  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151525  128282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.151554  128282 pod_ready.go:81] duration metric: took 14.680003ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151570  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293823  128282 pod_ready.go:92] pod "kube-proxy-wjrjj" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.293853  128282 pod_ready.go:81] duration metric: took 142.276185ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293864  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690262  128282 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.690291  128282 pod_ready.go:81] duration metric: took 396.420266ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690311  128282 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:10.001790  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.141984  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.142065  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.154365  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:08.641957  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.642070  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.654449  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:09.117052  127760 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:18:09.117093  127760 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:18:09.117131  127760 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:18:09.117195  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:18:09.165861  127760 cri.go:89] found id: ""
	I1212 23:18:09.165944  127760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:18:09.183729  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:18:09.194407  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:18:09.194487  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204575  127760 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204609  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:09.333758  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.380332  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04653446s)
	I1212 23:18:10.380364  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.603185  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.692919  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.776099  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:18:10.776189  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.795881  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.310083  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.809948  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.309977  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.810420  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.089789  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.589345  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:14.002715  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:13.310509  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:13.336361  127760 api_server.go:72] duration metric: took 2.560264825s to wait for apiserver process to appear ...
	I1212 23:18:13.336391  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:18:13.336411  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.319120  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.319159  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.319177  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.400337  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.400373  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.900625  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.906178  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:17.906233  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.401353  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.407217  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:18.407262  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.901435  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.913756  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:18:18.922517  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:18:18.922545  127760 api_server.go:131] duration metric: took 5.586147801s to wait for apiserver health ...
	I1212 23:18:18.922556  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:18:18.922563  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:18:18.924845  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:18:15.088538  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:17.587744  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:16.503957  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.002214  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:18.926570  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:18:18.976384  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:18:19.009915  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:18:19.035935  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:18:19.035986  127760 system_pods.go:61] "coredns-5dd5756b68-bz6cz" [4f53d6a6-c877-4f76-8aca-06ee891d9652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:18:19.035996  127760 system_pods.go:61] "etcd-embed-certs-809120" [260387de-7507-4962-b2fd-90cd6b39cae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:18:19.036005  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [94ded414-9813-4d0e-8de4-8ad5f6c16a33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:18:19.036017  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [c6574dde-8281-4dd2-bacd-c0412f1f592c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:18:19.036028  127760 system_pods.go:61] "kube-proxy-h7zgl" [87ca2a99-1da7-4a50-b4c7-f160cddf9ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:18:19.036042  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [fc6d3a5c-4056-47f8-9156-f5d370ba1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:18:19.036053  127760 system_pods.go:61] "metrics-server-57f55c9bc5-mxsd2" [d519663c-7921-4fc9-8d0f-ecf6d3cdbd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:18:19.036071  127760 system_pods.go:61] "storage-provisioner" [900e5cb9-7d27-4446-b15d-21f67fa3b629] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:18:19.036081  127760 system_pods.go:74] duration metric: took 26.13268ms to wait for pod list to return data ...
	I1212 23:18:19.036093  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:18:19.045885  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:18:19.045930  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:18:19.045945  127760 node_conditions.go:105] duration metric: took 9.842707ms to run NodePressure ...
	I1212 23:18:19.045969  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:19.587096  127760 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593698  127760 kubeadm.go:787] kubelet initialised
	I1212 23:18:19.593722  127760 kubeadm.go:788] duration metric: took 6.595854ms waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593730  127760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:19.602567  127760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:21.623798  127760 pod_ready.go:102] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.590788  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:22.089448  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:24.090497  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:21.501964  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.502814  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:26.000629  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.124864  127760 pod_ready.go:92] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:23.124888  127760 pod_ready.go:81] duration metric: took 3.52228673s waiting for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:23.124898  127760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:25.143967  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.146069  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.645645  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.645671  127760 pod_ready.go:81] duration metric: took 4.520766787s waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.645686  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652369  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.652392  127760 pod_ready.go:81] duration metric: took 6.700076ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652402  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587478  128156 pod_ready.go:92] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.587505  128156 pod_ready.go:81] duration metric: took 40.035726456s waiting for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587518  128156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.596994  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.597015  128156 pod_ready.go:81] duration metric: took 9.490538ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.597027  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601904  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.601930  128156 pod_ready.go:81] duration metric: took 4.894855ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601942  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608643  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.608662  128156 pod_ready.go:81] duration metric: took 6.712079ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608673  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614595  128156 pod_ready.go:92] pod "kube-proxy-rqhmc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.614624  128156 pod_ready.go:81] duration metric: took 5.945157ms waiting for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614632  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985244  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.985272  128156 pod_ready.go:81] duration metric: took 370.631498ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985282  128156 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.293707  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.293859  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:28.500792  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:31.002513  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.676207  127760 pod_ready.go:102] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:32.172306  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.172339  127760 pod_ready.go:81] duration metric: took 4.519929269s waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.172355  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178133  127760 pod_ready.go:92] pod "kube-proxy-h7zgl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.178154  127760 pod_ready.go:81] duration metric: took 5.793304ms waiting for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178163  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184283  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.184305  127760 pod_ready.go:81] duration metric: took 6.134863ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184319  127760 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:31.792415  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.793837  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.499687  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:35.500853  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:34.448290  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.948646  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.296844  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.793406  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:40.501951  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.949791  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.448832  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.294594  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.295134  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.000673  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.000747  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.452098  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.947475  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.793152  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.793282  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.003229  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.499682  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.949034  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:50.449118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.455176  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.793896  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.293413  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.293611  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:51.502870  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.000866  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.002047  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.948058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.950946  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.791908  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.792808  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.500328  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.000549  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:59.449089  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.948622  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:00.793090  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.294337  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.002131  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.500315  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.948920  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.949566  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.792376  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.793999  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:08.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.500002  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.950271  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.450074  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.292457  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.294375  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.503977  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:15.000631  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.948486  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.951220  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.448916  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.792888  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:16.793429  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.293010  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.000916  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.499770  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.449088  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.949856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.293433  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.792996  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.506787  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.507411  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:26.001279  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.950269  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.952818  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.793527  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.294892  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.499823  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.500142  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.448303  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.449512  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.793364  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.293202  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.001883  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.500561  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:32.948419  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:34.948716  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:36.949202  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.293744  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:37.294070  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:38.001116  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:40.001502  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.449215  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:41.948577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.793176  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.292783  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.501401  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:45.003364  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:43.950039  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.449043  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:44.792361  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.793184  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.294980  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:47.500147  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.501096  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:48.449912  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:50.950549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:51.794547  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.298465  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.000382  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.005736  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.950635  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:55.449330  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:57.449700  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.792615  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.499865  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:58.499980  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:00.500389  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.950151  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:02.447970  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:01.793306  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.793698  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.001300  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.499370  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:04.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:06.450549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.793804  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.793899  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.500520  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.000481  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:08.950058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:11.449345  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.293157  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.293642  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.500064  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.500937  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:13.949163  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:16.448489  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.793066  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.293467  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.293785  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.003921  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.501044  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:18.953218  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.449082  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.792447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.794479  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.999979  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:24.001269  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.001308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.948517  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:25.949879  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.292488  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.293405  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.499717  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.500472  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.448633  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.455346  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.293436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.296063  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:33.004484  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:35.500190  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.949307  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.949549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.447994  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.792727  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.292297  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.293185  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.501094  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:40.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.448914  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.449574  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.296498  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.794079  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:42.000667  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:44.500084  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.949370  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.448365  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.293571  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.795374  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.501287  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:49.000247  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.002102  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.449326  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:50.950049  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.295712  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.796436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.500278  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.500483  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:52.950509  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.448194  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:57.448444  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:56.293432  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.791909  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.000148  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.000718  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:59.448627  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:01.449178  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.793652  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.798916  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.501103  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:04.504053  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:03.948376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.949118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.293868  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.796468  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.000140  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:09.500040  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.949917  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.449692  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.296954  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.793159  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:11.500724  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:13.501811  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:16.000506  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.948932  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:14.951174  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.448985  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:15.294394  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.792822  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:18.501242  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.000679  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:19.449857  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.949137  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:20.293991  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:22.793476  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.501237  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.001069  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.950208  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.449036  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:25.294562  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:27.792099  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.500763  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.000635  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.947918  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:30.949180  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:29.793559  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.793709  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:34.292407  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:33.001948  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.002761  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:32.949352  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.448233  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.449470  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:36.292723  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:38.792944  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.501308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.001944  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:39.948613  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:41.953252  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.793938  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.796054  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.499956  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.504598  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.453963  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.952856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:45.292988  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:47.792829  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.999714  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.000749  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.000798  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.448592  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.461405  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.793084  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:52.293550  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.001475  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:55.499894  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.952376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.451000  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:54.793373  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.796557  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:59.293830  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:57.501136  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.000501  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:58.949246  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.949331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:01.792604  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.793283  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:02.501611  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.001210  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.449006  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.449356  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:06.291970  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:08.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.502381  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.690392  128282 pod_ready.go:81] duration metric: took 4m0.000056495s waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:07.690437  128282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:07.690447  128282 pod_ready.go:38] duration metric: took 4m3.599656754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:07.690468  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:22:07.690503  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:07.690560  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:07.752216  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:07.752249  128282 cri.go:89] found id: ""
	I1212 23:22:07.752258  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:07.752309  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.757000  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:07.757068  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:07.801367  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:07.801398  128282 cri.go:89] found id: ""
	I1212 23:22:07.801409  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:07.801470  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.806744  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:07.806804  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:07.850495  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:07.850530  128282 cri.go:89] found id: ""
	I1212 23:22:07.850538  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:07.850588  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.855144  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:07.855226  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:07.900092  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:07.900121  128282 cri.go:89] found id: ""
	I1212 23:22:07.900131  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:07.900199  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.904280  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:07.904357  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:07.945991  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:07.946019  128282 cri.go:89] found id: ""
	I1212 23:22:07.946034  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:07.946101  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.951095  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:07.951168  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:07.992586  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:07.992611  128282 cri.go:89] found id: ""
	I1212 23:22:07.992619  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:07.992667  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.996887  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:07.996945  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:08.038769  128282 cri.go:89] found id: ""
	I1212 23:22:08.038810  128282 logs.go:284] 0 containers: []
	W1212 23:22:08.038820  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:08.038829  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:08.038892  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:08.081167  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.081202  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.081209  128282 cri.go:89] found id: ""
	I1212 23:22:08.081225  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:08.081282  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.085740  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.089816  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:08.089836  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:08.137243  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:08.137274  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:08.180654  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:08.180686  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:08.240646  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:08.240684  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:08.289713  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:08.289753  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:08.440863  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:08.440902  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:08.505477  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:08.505516  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.561373  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:08.561411  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:08.626446  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:08.626482  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:08.681726  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:08.681769  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:08.703440  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:08.703468  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.739960  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:08.739998  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:09.213821  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:09.213867  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:07.949577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:09.950086  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.449579  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:10.793412  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.794447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:11.771447  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:11.787326  128282 api_server.go:72] duration metric: took 4m15.571529815s to wait for apiserver process to appear ...
	I1212 23:22:11.787355  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:22:11.787395  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:11.787459  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:11.841146  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:11.841178  128282 cri.go:89] found id: ""
	I1212 23:22:11.841199  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:11.841263  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.845844  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:11.845917  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:11.895757  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:11.895780  128282 cri.go:89] found id: ""
	I1212 23:22:11.895789  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:11.895846  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.900575  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:11.900641  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:11.941848  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:11.941872  128282 cri.go:89] found id: ""
	I1212 23:22:11.941882  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:11.941962  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.948119  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:11.948192  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:11.997102  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:11.997126  128282 cri.go:89] found id: ""
	I1212 23:22:11.997135  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:11.997189  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.002683  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:12.002750  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:12.042120  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:12.042144  128282 cri.go:89] found id: ""
	I1212 23:22:12.042159  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:12.042225  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.047068  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:12.047144  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:12.092055  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:12.092078  128282 cri.go:89] found id: ""
	I1212 23:22:12.092087  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:12.092137  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.097642  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:12.097713  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:12.137481  128282 cri.go:89] found id: ""
	I1212 23:22:12.137521  128282 logs.go:284] 0 containers: []
	W1212 23:22:12.137532  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:12.137542  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:12.137607  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:12.183712  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:12.183735  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.183740  128282 cri.go:89] found id: ""
	I1212 23:22:12.183747  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:12.183813  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.188656  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.193613  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:12.193639  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:12.206911  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:12.206941  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:12.258294  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:12.258335  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.300901  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:12.300934  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:12.765702  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:12.765746  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:12.909101  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:12.909138  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:12.967049  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:12.967083  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:13.010895  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:13.010930  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:13.062291  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:13.062324  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:13.107276  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:13.107320  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:13.166395  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:13.166448  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:13.212812  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:13.212853  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:13.260977  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:13.261022  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:15.816287  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:22:15.821554  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:22:15.822925  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:22:15.822945  128282 api_server.go:131] duration metric: took 4.035583432s to wait for apiserver health ...
	I1212 23:22:15.822954  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:22:15.822976  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:15.823024  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:15.870940  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:15.870981  128282 cri.go:89] found id: ""
	I1212 23:22:15.870993  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:15.871062  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.876167  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:15.876244  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:15.916642  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:15.916671  128282 cri.go:89] found id: ""
	I1212 23:22:15.916682  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:15.916747  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.921173  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:15.921238  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:15.963421  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:15.963449  128282 cri.go:89] found id: ""
	I1212 23:22:15.963461  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:15.963521  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.967747  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:15.967821  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:14.949925  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.949999  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:15.294181  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:17.793324  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.011046  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.011071  128282 cri.go:89] found id: ""
	I1212 23:22:16.011079  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:16.011128  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.015592  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:16.015659  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:16.058065  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:16.058092  128282 cri.go:89] found id: ""
	I1212 23:22:16.058103  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:16.058157  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.062334  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:16.062398  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:16.105032  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:16.105062  128282 cri.go:89] found id: ""
	I1212 23:22:16.105074  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:16.105140  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.109674  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:16.109728  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:16.151188  128282 cri.go:89] found id: ""
	I1212 23:22:16.151221  128282 logs.go:284] 0 containers: []
	W1212 23:22:16.151230  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:16.151246  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:16.151314  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:16.196149  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:16.196191  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.196199  128282 cri.go:89] found id: ""
	I1212 23:22:16.196209  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:16.196272  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.201690  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.205939  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:16.205970  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:16.358186  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:16.358236  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:16.404737  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:16.404780  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.449040  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:16.449069  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.491141  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:16.491173  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:16.860522  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:16.860578  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:16.877982  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:16.878030  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:16.923301  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:16.923338  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:16.965351  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:16.965382  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:17.024559  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:17.024603  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:17.079193  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:17.079229  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:17.123956  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:17.124003  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:17.202000  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:17.202043  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:19.755866  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:22:19.755901  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.755907  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.755914  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.755922  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.755929  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.755936  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.755946  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.755954  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.755963  128282 system_pods.go:74] duration metric: took 3.933003633s to wait for pod list to return data ...
	I1212 23:22:19.755977  128282 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:22:19.758618  128282 default_sa.go:45] found service account: "default"
	I1212 23:22:19.758639  128282 default_sa.go:55] duration metric: took 2.655294ms for default service account to be created ...
	I1212 23:22:19.758647  128282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:22:19.764376  128282 system_pods.go:86] 8 kube-system pods found
	I1212 23:22:19.764398  128282 system_pods.go:89] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.764404  128282 system_pods.go:89] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.764409  128282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.764414  128282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.764418  128282 system_pods.go:89] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.764432  128282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.764444  128282 system_pods.go:89] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.764454  128282 system_pods.go:89] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.764464  128282 system_pods.go:126] duration metric: took 5.811076ms to wait for k8s-apps to be running ...
	I1212 23:22:19.764475  128282 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:22:19.764531  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:19.781048  128282 system_svc.go:56] duration metric: took 16.561836ms WaitForService to wait for kubelet.
	I1212 23:22:19.781100  128282 kubeadm.go:581] duration metric: took 4m23.565309829s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:22:19.781129  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:22:19.784205  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:22:19.784229  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:22:19.784240  128282 node_conditions.go:105] duration metric: took 3.105926ms to run NodePressure ...
	I1212 23:22:19.784253  128282 start.go:228] waiting for startup goroutines ...
	I1212 23:22:19.784259  128282 start.go:233] waiting for cluster config update ...
	I1212 23:22:19.784269  128282 start.go:242] writing updated cluster config ...
	I1212 23:22:19.784545  128282 ssh_runner.go:195] Run: rm -f paused
	I1212 23:22:19.840938  128282 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:22:19.842885  128282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850839" cluster and "default" namespace by default
	I1212 23:22:19.449331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:21.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:20.294156  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:22.792746  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:23.949834  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:26.452555  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.793601  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.985518  128156 pod_ready.go:81] duration metric: took 4m0.000203674s waiting for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:24.985551  128156 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:24.985571  128156 pod_ready.go:38] duration metric: took 4m40.456239368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:24.985600  128156 kubeadm.go:640] restartCluster took 5m2.616770336s
	W1212 23:22:24.985660  128156 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:24.985690  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:28.949293  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:31.449689  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:32.184476  127760 pod_ready.go:81] duration metric: took 4m0.000136331s waiting for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:32.184516  127760 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:32.184559  127760 pod_ready.go:38] duration metric: took 4m12.59080567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:32.184598  127760 kubeadm.go:640] restartCluster took 4m33.093698567s
	W1212 23:22:32.184674  127760 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:32.184715  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:39.117782  128156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.132057077s)
	I1212 23:22:39.117868  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:39.132912  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:39.143453  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:39.153628  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:39.153684  128156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:39.374201  128156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:46.310264  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.12551082s)
	I1212 23:22:46.310350  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:46.327577  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:46.339177  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:46.350355  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:46.350407  127760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:46.414859  127760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:22:46.414971  127760 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:46.599881  127760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:46.600039  127760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:46.600208  127760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:46.867542  127760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:46.869398  127760 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:46.869528  127760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:46.869659  127760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:46.869770  127760 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:46.869933  127760 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:46.870496  127760 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:46.871021  127760 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:46.871802  127760 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:46.873187  127760 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:46.874737  127760 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:46.876316  127760 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:46.877713  127760 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:46.877769  127760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:47.211156  127760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:47.370652  127760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:47.491927  127760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:47.746007  127760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:47.746996  127760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:47.749868  127760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:47.751553  127760 out.go:204]   - Booting up control plane ...
	I1212 23:22:47.751724  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:47.751814  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:47.752662  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:47.770296  127760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:47.770438  127760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:47.770546  127760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.362262  128156 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:22:51.362341  128156 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:51.362461  128156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:51.362593  128156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:51.362706  128156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:51.362781  128156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:51.364439  128156 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:51.364561  128156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:51.364660  128156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:51.364758  128156 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:51.364840  128156 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:51.364971  128156 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:51.365060  128156 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:51.365137  128156 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:51.365215  128156 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:51.365320  128156 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:51.365425  128156 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:51.365479  128156 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:51.365553  128156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:51.365626  128156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:51.365706  128156 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:22:51.365778  128156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:51.365859  128156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:51.365936  128156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:51.366046  128156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:51.366131  128156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:51.368190  128156 out.go:204]   - Booting up control plane ...
	I1212 23:22:51.368316  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:51.368421  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:51.368517  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:51.368649  128156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:51.368763  128156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:51.368813  128156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.369013  128156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.369107  128156 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503652 seconds
	I1212 23:22:51.369231  128156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:51.369390  128156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:51.369465  128156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:51.369709  128156 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-115023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:51.369780  128156 kubeadm.go:322] [bootstrap-token] Using token: agyzoj.wkr94b17dt19k7yx
	I1212 23:22:51.371110  128156 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:51.371306  128156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:51.371421  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:51.371643  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:51.371825  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:51.371975  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:51.372085  128156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:51.372226  128156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:51.372285  128156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:51.372344  128156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:51.372353  128156 kubeadm.go:322] 
	I1212 23:22:51.372425  128156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:51.372437  128156 kubeadm.go:322] 
	I1212 23:22:51.372529  128156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:51.372540  128156 kubeadm.go:322] 
	I1212 23:22:51.372571  128156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:51.372645  128156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:51.372711  128156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:51.372720  128156 kubeadm.go:322] 
	I1212 23:22:51.372793  128156 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:51.372804  128156 kubeadm.go:322] 
	I1212 23:22:51.372861  128156 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:51.372871  128156 kubeadm.go:322] 
	I1212 23:22:51.372933  128156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:51.373050  128156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:51.373137  128156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:51.373149  128156 kubeadm.go:322] 
	I1212 23:22:51.373248  128156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:51.373345  128156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:51.373356  128156 kubeadm.go:322] 
	I1212 23:22:51.373456  128156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373583  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:51.373613  128156 kubeadm.go:322] 	--control-plane 
	I1212 23:22:51.373623  128156 kubeadm.go:322] 
	I1212 23:22:51.373724  128156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:51.373739  128156 kubeadm.go:322] 
	I1212 23:22:51.373842  128156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373985  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
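Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed on the control-plane node, the standard kubeadm-documented pipeline looks like the sketch below (the CA path is the kubeadm default and is assumed, not taken from this log):

    # Recompute the CA public-key hash used by "kubeadm join" (sketch; path assumed)
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # -> cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 (the value shown above)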
	I1212 23:22:51.374006  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:22:51.374015  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:51.375563  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:47.945457  127760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.376861  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:51.414215  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
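Note: the 457-byte file copied above is minikube's bridge CNI configuration. Its exact contents are not captured in this log, but a bridge + portmap plugin chain of this kind typically looks roughly like the commented sketch below (all field values are illustrative assumptions):

    # Inspect the generated CNI config (command assumed, not from the log):
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # Typically a bridge + portmap chain, roughly (values illustrative):
    #   { "cniVersion": "0.3.1", "name": "bridge",
    #     "plugins": [
    #       { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
    #         "ipMasq": true, "hairpinMode": true,
    #         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #       { "type": "portmap", "capabilities": { "portMappings": true } } ] }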
	I1212 23:22:51.484549  128156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:51.484635  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.484696  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=no-preload-115023 minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.564599  128156 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:51.924093  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.026923  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.628483  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.128275  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.628006  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:54.127897  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.450625  127760 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504757 seconds
	I1212 23:22:56.450779  127760 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:56.468441  127760 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:57.003074  127760 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:57.003292  127760 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-809120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:57.518097  127760 kubeadm.go:322] [bootstrap-token] Using token: ichlu8.wzw1wbhrbc06xbtw
	I1212 23:22:57.519536  127760 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:57.519639  127760 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:57.528652  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:57.538325  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:57.542226  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:57.551395  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:57.556988  127760 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:57.573462  127760 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:57.833933  127760 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:57.949764  127760 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:57.949788  127760 kubeadm.go:322] 
	I1212 23:22:57.949888  127760 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:57.949913  127760 kubeadm.go:322] 
	I1212 23:22:57.950013  127760 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:57.950036  127760 kubeadm.go:322] 
	I1212 23:22:57.950079  127760 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:57.950155  127760 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:57.950228  127760 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:57.950240  127760 kubeadm.go:322] 
	I1212 23:22:57.950301  127760 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:57.950311  127760 kubeadm.go:322] 
	I1212 23:22:57.950375  127760 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:57.950385  127760 kubeadm.go:322] 
	I1212 23:22:57.950468  127760 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:57.950578  127760 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:57.950678  127760 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:57.950702  127760 kubeadm.go:322] 
	I1212 23:22:57.950818  127760 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:57.950916  127760 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:57.950926  127760 kubeadm.go:322] 
	I1212 23:22:57.951054  127760 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951199  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:57.951231  127760 kubeadm.go:322] 	--control-plane 
	I1212 23:22:57.951266  127760 kubeadm.go:322] 
	I1212 23:22:57.951386  127760 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:57.951396  127760 kubeadm.go:322] 
	I1212 23:22:57.951494  127760 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951619  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:57.952303  127760 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:57.952326  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:22:57.952337  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:57.954692  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:54.628965  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.127922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.627980  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.128047  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.628471  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.128456  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.628284  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.128528  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.628480  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.128296  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.955898  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:57.975567  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:58.044612  127760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:58.044741  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.044746  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=embed-certs-809120 minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.158788  127760 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:58.375305  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.487117  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.075465  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.575132  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.075781  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.575754  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.075376  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.575524  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.075163  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.574821  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.628475  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.128509  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.628837  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.128959  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.627976  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.128077  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.628493  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.128203  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.628549  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.127987  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.627922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.756882  128156 kubeadm.go:1088] duration metric: took 13.272316322s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:04.756928  128156 kubeadm.go:406] StartCluster complete in 5m42.440524658s
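Note: the long run of "kubectl get sa default" commands above is minikube's elevateKubeSystemPrivileges step polling until the default ServiceAccount exists before the cluster-admin binding takes effect. A manual equivalent is a simple retry loop over the same command shown in the log (sketch):

    # Poll until the "default" ServiceAccount exists (sketch of the wait above)
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done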
	I1212 23:23:04.756955  128156 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.757069  128156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:04.759734  128156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.760081  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:04.760220  128156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:04.760311  128156 addons.go:69] Setting storage-provisioner=true in profile "no-preload-115023"
	I1212 23:23:04.760325  128156 addons.go:69] Setting default-storageclass=true in profile "no-preload-115023"
	I1212 23:23:04.760358  128156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-115023"
	I1212 23:23:04.760385  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:23:04.760332  128156 addons.go:231] Setting addon storage-provisioner=true in "no-preload-115023"
	W1212 23:23:04.760426  128156 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:04.760497  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760337  128156 addons.go:69] Setting metrics-server=true in profile "no-preload-115023"
	I1212 23:23:04.760525  128156 addons.go:231] Setting addon metrics-server=true in "no-preload-115023"
	W1212 23:23:04.760538  128156 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:04.760577  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760759  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760787  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.760953  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760986  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760995  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.761010  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.777848  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1212 23:23:04.778063  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1212 23:23:04.778315  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778479  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778613  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I1212 23:23:04.778931  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778945  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778952  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.778957  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779020  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.779302  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779561  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.779726  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.779749  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779929  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.779961  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.780516  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.781173  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.781207  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.783399  128156 addons.go:231] Setting addon default-storageclass=true in "no-preload-115023"
	W1212 23:23:04.783422  128156 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:04.783452  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.783871  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.783906  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.797493  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:23:04.797741  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I1212 23:23:04.798102  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798132  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798613  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798630  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.798956  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798985  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.799262  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799438  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.799639  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.801934  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.802007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.803861  128156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:04.802341  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:23:04.806911  128156 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:04.805759  128156 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:04.806058  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.808825  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:04.808833  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:04.808848  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:04.808856  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.808863  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.809266  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.809281  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.809624  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.810352  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.810381  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.813139  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813629  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.813654  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813828  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813882  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814303  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.814333  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.814148  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814542  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814625  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.814797  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814855  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.814954  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.815127  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.823127  128156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-115023" context rescaled to 1 replicas
	I1212 23:23:04.823174  128156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:04.824991  128156 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:04.826596  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:04.827821  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1212 23:23:04.828256  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.828820  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.828845  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.829390  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.829741  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.834167  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.834521  128156 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:04.834539  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:04.834563  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.838055  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838555  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.838587  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838772  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.838964  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.839119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.839284  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.972964  128156 node_ready.go:35] waiting up to 6m0s for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.973014  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:04.998182  128156 node_ready.go:49] node "no-preload-115023" has status "Ready":"True"
	I1212 23:23:04.998214  128156 node_ready.go:38] duration metric: took 25.214785ms waiting for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.998226  128156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:05.012036  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:05.027954  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:05.027977  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:05.063451  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:05.076403  128156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:05.119924  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:05.119957  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:05.216413  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.216443  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:05.285434  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.817542  128156 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
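Note: the sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP. The injected fragment can be confirmed afterwards (the kubectl command is assumed; the fragment itself is taken directly from the sed expression above):

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    #        hosts {
    #           192.168.72.1 host.minikube.internal
    #           fallthrough
    #        }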
	I1212 23:23:06.316381  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252894593s)
	I1212 23:23:06.316378  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304291472s)
	I1212 23:23:06.316446  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316460  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316491  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316509  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316903  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.316959  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.316966  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.316986  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316916  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317010  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.317022  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316995  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317032  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317327  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.317387  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317408  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.318858  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.318881  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.366104  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.366135  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.366427  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.366481  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.366492  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618093  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332604197s)
	I1212 23:23:06.618161  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618183  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618643  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.618665  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618676  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618684  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618845  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620326  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620340  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.620363  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.620384  128156 addons.go:467] Verifying addon metrics-server=true in "no-preload-115023"
	I1212 23:23:06.622226  128156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
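Note: a few spot-checks for the three addons reported enabled above (the commands, and the "standard" class name, are assumptions rather than log output):

    kubectl get storageclass                           # default class (typically "standard")
    kubectl -n kube-system get pod storage-provisioner # Running per the pod list below
    kubectl -n kube-system get deploy metrics-server   # 0/1 here: its pod stays Pending because
                                                       # the test points it at a fake.domain image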
	I1212 23:23:03.075069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.575772  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.074921  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.575481  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.075785  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.575855  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.075276  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.575017  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.075100  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.575342  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.623716  128156 addons.go:502] enable addons completed in 1.863496659s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:07.165490  128156 pod_ready.go:102] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:08.161341  128156 pod_ready.go:92] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.161380  128156 pod_ready.go:81] duration metric: took 3.084948492s waiting for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.161395  128156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169259  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.169294  128156 pod_ready.go:81] duration metric: took 7.890109ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169309  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176068  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.176097  128156 pod_ready.go:81] duration metric: took 6.779109ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176111  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183056  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.183085  128156 pod_ready.go:81] duration metric: took 6.964809ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183099  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066100  128156 pod_ready.go:92] pod "kube-proxy-qs95k" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.066123  128156 pod_ready.go:81] duration metric: took 883.017234ms waiting for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066132  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357841  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.357874  128156 pod_ready.go:81] duration metric: took 291.734639ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357884  128156 pod_ready.go:38] duration metric: took 4.359648281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:09.357904  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:09.357970  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:09.372791  128156 api_server.go:72] duration metric: took 4.549577037s to wait for apiserver process to appear ...
	I1212 23:23:09.372820  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:09.372841  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:23:09.378375  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
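Note: the healthz probe above hits the API server endpoint directly. An equivalent manual check against the same URL would be the sketch below (client-certificate paths assume minikube's default layout and are not taken from this log):

    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/no-preload-115023/client.crt \
         --key    ~/.minikube/profiles/no-preload-115023/client.key \
         https://192.168.72.32:8443/healthz
    # -> ok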
	I1212 23:23:09.379855  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:23:09.379882  128156 api_server.go:131] duration metric: took 7.054126ms to wait for apiserver health ...
	I1212 23:23:09.379893  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:09.561188  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:09.561216  128156 system_pods.go:61] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.561221  128156 system_pods.go:61] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.561225  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.561229  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.561235  128156 system_pods.go:61] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.561239  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.561245  128156 system_pods.go:61] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.561249  128156 system_pods.go:61] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.561257  128156 system_pods.go:74] duration metric: took 181.358443ms to wait for pod list to return data ...
	I1212 23:23:09.561265  128156 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:09.756864  128156 default_sa.go:45] found service account: "default"
	I1212 23:23:09.756894  128156 default_sa.go:55] duration metric: took 195.622122ms for default service account to be created ...
	I1212 23:23:09.756905  128156 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:09.960670  128156 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:09.960700  128156 system_pods.go:89] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.960705  128156 system_pods.go:89] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.960710  128156 system_pods.go:89] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.960715  128156 system_pods.go:89] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.960719  128156 system_pods.go:89] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.960723  128156 system_pods.go:89] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.960729  128156 system_pods.go:89] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.960735  128156 system_pods.go:89] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.960744  128156 system_pods.go:126] duration metric: took 203.831934ms to wait for k8s-apps to be running ...
	I1212 23:23:09.960754  128156 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:09.960805  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:09.974511  128156 system_svc.go:56] duration metric: took 13.742619ms WaitForService to wait for kubelet.
	I1212 23:23:09.974543  128156 kubeadm.go:581] duration metric: took 5.15133848s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:09.974571  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:10.158679  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:10.158708  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:10.158717  128156 node_conditions.go:105] duration metric: took 184.140544ms to run NodePressure ...
	I1212 23:23:10.158730  128156 start.go:228] waiting for startup goroutines ...
	I1212 23:23:10.158736  128156 start.go:233] waiting for cluster config update ...
	I1212 23:23:10.158746  128156 start.go:242] writing updated cluster config ...
	I1212 23:23:10.158996  128156 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:10.222646  128156 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:23:10.224867  128156 out.go:177] * Done! kubectl is now configured to use "no-preload-115023" cluster and "default" namespace by default
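Note: the minor version skew reported just above (kubectl 1.28.4 client vs 1.29.0-rc.2 cluster) is within kubectl's supported +/-1 window. It can be confirmed from the same kubeconfig with the assumed commands below (expected values taken from the log):

    kubectl config current-context   # -> no-preload-115023
    kubectl version                  # client v1.28.4, server v1.29.0-rc.2
    kubectl get nodes                # no-preload-115023   Ready   control-plane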
	I1212 23:23:08.075026  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:08.574992  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.075693  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.575069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.075713  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.575464  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.075090  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.250257  127760 kubeadm.go:1088] duration metric: took 13.205579442s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:11.250290  127760 kubeadm.go:406] StartCluster complete in 5m12.212668558s
	I1212 23:23:11.250312  127760 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.250409  127760 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:11.253977  127760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.254241  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:11.254250  127760 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:11.254337  127760 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-809120"
	I1212 23:23:11.254351  127760 addons.go:69] Setting default-storageclass=true in profile "embed-certs-809120"
	I1212 23:23:11.254358  127760 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-809120"
	W1212 23:23:11.254366  127760 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:11.254369  127760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-809120"
	I1212 23:23:11.254422  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254431  127760 addons.go:69] Setting metrics-server=true in profile "embed-certs-809120"
	I1212 23:23:11.254457  127760 addons.go:231] Setting addon metrics-server=true in "embed-certs-809120"
	W1212 23:23:11.254466  127760 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:11.254466  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:23:11.254510  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254798  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254802  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254845  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.254902  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254933  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.255058  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.272689  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1212 23:23:11.272926  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1212 23:23:11.273095  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273297  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273444  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1212 23:23:11.273710  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273722  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.273784  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273935  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273947  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274917  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.274942  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.275403  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.275452  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.275615  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.275776  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.276164  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.276199  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.279953  127760 addons.go:231] Setting addon default-storageclass=true in "embed-certs-809120"
	W1212 23:23:11.279984  127760 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:11.280016  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.280439  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.280488  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.296262  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1212 23:23:11.296273  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1212 23:23:11.296731  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.296839  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.297284  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297296  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297304  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297315  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297662  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297722  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297820  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.297867  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1212 23:23:11.297876  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.298202  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.298805  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.298823  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.299106  127760 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-809120" context rescaled to 1 replicas
	I1212 23:23:11.299151  127760 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:11.300876  127760 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:11.299808  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.299838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.299990  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.302374  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:11.303907  127760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:11.305369  127760 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:11.302872  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.307972  127760 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.307992  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:11.308012  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306693  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:11.308064  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:11.308088  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306729  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.312550  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312826  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.312853  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313337  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.313477  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.313493  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313524  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.313558  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313610  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.313772  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313988  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.314165  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.314287  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.334457  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1212 23:23:11.335025  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.335687  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.335719  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.336130  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.336356  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.338062  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.338356  127760 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.338380  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:11.338407  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.341489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342079  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.342119  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342283  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.342499  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.342642  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.342823  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.562179  127760 node_ready.go:35] waiting up to 6m0s for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.562383  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:11.573888  127760 node_ready.go:49] node "embed-certs-809120" has status "Ready":"True"
	I1212 23:23:11.573909  127760 node_ready.go:38] duration metric: took 11.694074ms waiting for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.573919  127760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:11.591310  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:11.634553  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.672164  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.681199  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:11.681232  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:11.910291  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:11.910325  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:11.993110  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:11.993135  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:12.043047  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:13.550517  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.988091372s)
	I1212 23:23:13.550558  127760 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:13.642966  127760 pod_ready.go:102] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:14.387226  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752630931s)
	I1212 23:23:14.387298  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387315  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387321  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715126034s)
	I1212 23:23:14.387345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387359  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387641  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387663  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387675  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387690  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387776  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387801  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387811  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387819  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.388233  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388247  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388248  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.388285  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388291  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388345  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.426683  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.426713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.427017  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.427030  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.427038  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.477873  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.434777303s)
	I1212 23:23:14.477930  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.477944  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478303  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478321  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.478333  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.478357  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478607  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478622  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478632  127760 addons.go:467] Verifying addon metrics-server=true in "embed-certs-809120"
	I1212 23:23:14.480500  127760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:14.481900  127760 addons.go:502] enable addons completed in 3.227656537s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:15.629572  127760 pod_ready.go:92] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.629599  127760 pod_ready.go:81] duration metric: took 4.038262674s waiting for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.629608  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.638502  127760 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638532  127760 pod_ready.go:81] duration metric: took 8.918039ms waiting for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	E1212 23:23:15.638547  127760 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638556  127760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647047  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.647075  127760 pod_ready.go:81] duration metric: took 8.510672ms waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647089  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655068  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.655091  127760 pod_ready.go:81] duration metric: took 7.994932ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655100  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664338  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.664386  127760 pod_ready.go:81] duration metric: took 9.26869ms waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664401  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732454  127760 pod_ready.go:92] pod "kube-proxy-4nb6w" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:16.732480  127760 pod_ready.go:81] duration metric: took 1.068071012s waiting for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732489  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022376  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:17.022402  127760 pod_ready.go:81] duration metric: took 289.906446ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022423  127760 pod_ready.go:38] duration metric: took 5.448491831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:17.022445  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:17.022494  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:17.039594  127760 api_server.go:72] duration metric: took 5.740406855s to wait for apiserver process to appear ...
	I1212 23:23:17.039620  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:17.039637  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:23:17.044745  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:23:17.046494  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:23:17.046521  127760 api_server.go:131] duration metric: took 6.894306ms to wait for apiserver health ...
	I1212 23:23:17.046531  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:17.227869  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:17.227899  127760 system_pods.go:61] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.227904  127760 system_pods.go:61] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.227909  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.227913  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.227916  127760 system_pods.go:61] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.227920  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.227927  127760 system_pods.go:61] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.227933  127760 system_pods.go:61] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.227944  127760 system_pods.go:74] duration metric: took 181.405975ms to wait for pod list to return data ...
	I1212 23:23:17.227962  127760 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:17.423151  127760 default_sa.go:45] found service account: "default"
	I1212 23:23:17.423181  127760 default_sa.go:55] duration metric: took 195.20215ms for default service account to be created ...
	I1212 23:23:17.423190  127760 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:17.627077  127760 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:17.627104  127760 system_pods.go:89] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.627109  127760 system_pods.go:89] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.627114  127760 system_pods.go:89] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.627118  127760 system_pods.go:89] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.627124  127760 system_pods.go:89] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.627128  127760 system_pods.go:89] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.627135  127760 system_pods.go:89] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.627139  127760 system_pods.go:89] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.627147  127760 system_pods.go:126] duration metric: took 203.952951ms to wait for k8s-apps to be running ...
	I1212 23:23:17.627155  127760 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:17.627197  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:17.641949  127760 system_svc.go:56] duration metric: took 14.784378ms WaitForService to wait for kubelet.
	I1212 23:23:17.641979  127760 kubeadm.go:581] duration metric: took 6.342797652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:17.642005  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:17.823169  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:17.823201  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:17.823214  127760 node_conditions.go:105] duration metric: took 181.202017ms to run NodePressure ...
	I1212 23:23:17.823230  127760 start.go:228] waiting for startup goroutines ...
	I1212 23:23:17.823258  127760 start.go:233] waiting for cluster config update ...
	I1212 23:23:17.823276  127760 start.go:242] writing updated cluster config ...
	I1212 23:23:17.823609  127760 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:17.879192  127760 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:23:17.880946  127760 out.go:177] * Done! kubectl is now configured to use "embed-certs-809120" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:16:53 UTC, ends at Tue 2023-12-12 23:32:12 UTC. --
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.038015156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5012366e-353b-4e56-a2a1-7867596a5c68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.038088469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5012366e-353b-4e56-a2a1-7867596a5c68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.038307869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5012366e-353b-4e56-a2a1-7867596a5c68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.073952940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=93c830f3-d96c-4a3e-ae30-5e1badfd2013 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.074051854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=93c830f3-d96c-4a3e-ae30-5e1badfd2013 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.075137719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=74f36817-f24f-4a98-80ce-bd480dcbdf72 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.075525086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423932075509528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=74f36817-f24f-4a98-80ce-bd480dcbdf72 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.075974870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0ed2c1c-e7cf-4d6b-9661-ded63fbe17c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.076050377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0ed2c1c-e7cf-4d6b-9661-ded63fbe17c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.076266749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0ed2c1c-e7cf-4d6b-9661-ded63fbe17c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.098558628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7a0526a6-062b-47c9-90a2-a17b73df3857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.098697345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7a0526a6-062b-47c9-90a2-a17b73df3857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.098879350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7a0526a6-062b-47c9-90a2-a17b73df3857 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.104355589Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=2c77e36d-fa5c-40a1-a821-0e9cc79c58bb name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.104506340Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702423388233510505,StartedAt:1702423388272805841,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5e1865df-d2a5-4ebe-be00-20aa7a752e65/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5e1865df-d2a5-4ebe-be00-20aa7a752e65/containers/storage-provisioner/0a119102,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5e1865df-d2a5-4ebe-be00-20aa7a752e65/volumes/kubernetes.io~projected/kube-api-access-sn257,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_5e1865df-d2a5-4ebe-be00-20aa7a752e65/storage-provisioner/0.
log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=2c77e36d-fa5c-40a1-a821-0e9cc79c58bb name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.104965356Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=08fddccf-7701-4fbc-9e74-365a3aa05784 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.105087220Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702423387890311231,StartedAt:1702423387947967400,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5d936172-0411-4163-a62a-25a11d4ac2f4/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5d936172-0411-4163-a62a-25a11d4ac2f4/containers/kube-proxy/297f8026,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/5d936172-0411-4163-a62a-25a11d4ac2f4/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernete
s.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5d936172-0411-4163-a62a-25a11d4ac2f4/volumes/kubernetes.io~projected/kube-api-access-jxrjh,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-qs95k_5d936172-0411-4163-a62a-25a11d4ac2f4/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=08fddccf-7701-4fbc-9e74-365a3aa05784 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.105573295Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e20adbc2-dfad-4b50-99c9-41ddaab8e849 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.105728781Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1702423387387991036,StartedAt:1702423387449674190,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/6c1b5bb4-619d-48a2-9c81-060018616240/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6c1b5bb4-619d-48a2-9c81-060018616240/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6c1b5bb4-619d-48a2-9c81-060018616240/containers/coredns/2bf2a4e3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/l
ib/kubelet/pods/6c1b5bb4-619d-48a2-9c81-060018616240/volumes/kubernetes.io~projected/kube-api-access-tx5bj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-9wxzk_6c1b5bb4-619d-48a2-9c81-060018616240/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e20adbc2-dfad-4b50-99c9-41ddaab8e849 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.106655146Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=da997d9b-de84-4872-8e66-57a7afeb5189 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.106729595Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702423363685109432,StartedAt:1702423365223485073,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/24e7d66089090d7e8a595d9f335e4709/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/24e7d66089090d7e8a595d9f335e4709/containers/kube-scheduler/05037004,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-115023_24e7d66089090d7e8a595d9f335e4709/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=da997d9b-de84-4872-8e66-57a7afeb5189 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.107257188Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=95172eed-f20b-4758-9149-b256896eb32d name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.107337645Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702423363122768888,StartedAt:1702423363900104920,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8ff2739708a59d44f5a39a50cec77f81/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8ff2739708a59d44f5a39a50cec77f81/containers/kube-controller-manager/661dd632,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_
PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-115023_8ff2739708a59d44f5a39a50cec77f81/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=95172eed-f20b-4758-9149-b256896eb32d name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.107747324Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=b2b7baaa-fae5-40c6-8106-4fbe2e402db2 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 12 23:32:12 no-preload-115023 crio[716]: time="2023-12-12 23:32:12.107850455Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1702423363115644720,StartedAt:1702423364050703146,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7e1cbd99625f6216cc9339126276ebbf/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7e1cbd99625f6216cc9339126276ebbf/containers/kube-apiserver/34ae7985,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-115023_7e1cbd9
9625f6216cc9339126276ebbf/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b2b7baaa-fae5-40c6-8106-4fbe2e402db2 name=/runtime.v1.RuntimeService/ContainerStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf741185c5b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   22a91e989de5c       storage-provisioner
	20f1fb49ef910       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   0266489be870b       kube-proxy-qs95k
	590598e80e2c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c8703f6b4f020       coredns-76f75df574-9wxzk
	7d7d09efdc52f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   13f74c4eeaf43       kube-scheduler-no-preload-115023
	32a84a4009e60       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   d5f3d42af47d3       kube-controller-manager-no-preload-115023
	47857508f38da       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   019ea648a9b14       kube-apiserver-no-preload-115023
	44d52798e1c78       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   3bf80270d3dd5       etcd-no-preload-115023
	
	* 
	* ==> coredns [590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-115023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-115023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=no-preload-115023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-115023
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:32:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:28:18 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:28:18 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:28:18 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:28:18 +0000   Tue, 12 Dec 2023 23:23:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.32
	  Hostname:    no-preload-115023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 75a7463f23fa499895a4e6f2db6821d6
	  System UUID:                75a7463f-23fa-4998-95a4-e6f2db6821d6
	  Boot ID:                    3fe4d199-2267-4d2a-912b-d0b05050570a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-9wxzk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-115023                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m24s
	  kube-system                 kube-apiserver-no-preload-115023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-no-preload-115023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-qs95k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-115023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-wlql5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node no-preload-115023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node no-preload-115023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node no-preload-115023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-115023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-115023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-115023 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node no-preload-115023 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node no-preload-115023 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-115023 event: Registered Node no-preload-115023 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076390] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.486206] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.514053] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154305] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.562486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:17] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.139539] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.157836] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126203] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.267238] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +30.563335] systemd-fstab-generator[1324]: Ignoring "noauto" for root device
	[ +20.436510] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 23:22] systemd-fstab-generator[3918]: Ignoring "noauto" for root device
	[  +9.814767] systemd-fstab-generator[4251]: Ignoring "noauto" for root device
	[Dec12 23:23] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59] <==
	* {"level":"info","ts":"2023-12-12T23:22:45.051514Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.32:2380"}
	{"level":"info","ts":"2023-12-12T23:22:45.051555Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.32:2380"}
	{"level":"info","ts":"2023-12-12T23:22:45.051611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 switched to configuration voters=(12642210001372762980)"}
	{"level":"info","ts":"2023-12-12T23:22:45.051774Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"69693fe7a610a475","local-member-id":"af722703d3b6d364","added-peer-id":"af722703d3b6d364","added-peer-peer-urls":["https://192.168.72.32:2380"]}
	{"level":"info","ts":"2023-12-12T23:22:45.494315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:45.494497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:45.494584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgPreVoteResp from af722703d3b6d364 at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:45.494647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.494676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgVoteResp from af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.494797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.494908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af722703d3b6d364 elected leader af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.499483Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"af722703d3b6d364","local-member-attributes":"{Name:no-preload-115023 ClientURLs:[https://192.168.72.32:2379]}","request-path":"/0/members/af722703d3b6d364/attributes","cluster-id":"69693fe7a610a475","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:22:45.500249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:45.500632Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.50084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:45.507283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:45.507333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:45.508891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.32:2379"}
	{"level":"info","ts":"2023-12-12T23:22:45.511737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:22:45.514506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"69693fe7a610a475","local-member-id":"af722703d3b6d364","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.514702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.514781Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-12-12T23:23:05.606357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.973216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-115023\" ","response":"range_response_count:1 size:4649"}
	{"level":"info","ts":"2023-12-12T23:23:05.606468Z","caller":"traceutil/trace.go:171","msg":"trace[960256067] range","detail":"{range_begin:/registry/minions/no-preload-115023; range_end:; response_count:1; response_revision:380; }","duration":"152.289745ms","start":"2023-12-12T23:23:05.454158Z","end":"2023-12-12T23:23:05.606448Z","steps":["trace[960256067] 'range keys from in-memory index tree'  (duration: 123.661838ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:23:05.62993Z","caller":"traceutil/trace.go:171","msg":"trace[2116474601] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"114.986844ms","start":"2023-12-12T23:23:05.514818Z","end":"2023-12-12T23:23:05.629804Z","steps":["trace[2116474601] 'process raft request'  (duration: 32.276629ms)","trace[2116474601] 'compare'  (duration: 23.840773ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  23:32:12 up 15 min,  0 users,  load average: 0.29, 0.26, 0.27
	Linux no-preload-115023 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61] <==
	* I1212 23:26:07.295346       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:27:47.718819       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:47.719305       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1212 23:27:48.719740       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:48.720014       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:27:48.720065       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:27:48.720297       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:48.720424       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:27:48.726338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:28:48.720552       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:48.720843       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:28:48.720957       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:28:48.727459       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:48.727532       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:28:48.727543       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:30:48.722028       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:48.722557       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:30:48.722605       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:30:48.728485       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:48.728560       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:30:48.728570       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023] <==
	* I1212 23:26:42.492911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="376.827µs"
	E1212 23:27:03.995058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:04.473920       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:27:34.002049       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:34.497710       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:04.011465       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:04.507483       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:34.017508       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:34.517622       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:29:04.023311       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:04.526733       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:29:21.496951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="395.63µs"
	E1212 23:29:34.030288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:34.540591       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:29:36.488115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="147.749µs"
	E1212 23:30:04.037709       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:04.550316       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:30:34.044588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:34.560645       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:31:04.052830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:31:04.569514       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:31:34.059742       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:31:34.583376       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:32:04.066587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:32:04.594818       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24] <==
	* I1212 23:23:08.038444       1 server_others.go:72] "Using iptables proxy"
	I1212 23:23:08.052347       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.32"]
	I1212 23:23:08.126097       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 23:23:08.126285       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:23:08.126305       1 server_others.go:168] "Using iptables Proxier"
	I1212 23:23:08.153110       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:23:08.156733       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1212 23:23:08.156797       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:23:08.168298       1 config.go:188] "Starting service config controller"
	I1212 23:23:08.168335       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:23:08.168400       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:23:08.168407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:23:08.179397       1 config.go:315] "Starting node config controller"
	I1212 23:23:08.182267       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:23:08.269814       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:23:08.269871       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:23:08.283058       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c] <==
	* W1212 23:22:47.769659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:47.769674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:22:47.769765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:22:47.769781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:22:47.769844       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:47.769859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:22:47.769910       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:22:47.769919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:22:47.769927       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:47.769934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:48.750761       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:48.750884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:48.753487       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:22:48.753541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:22:48.833131       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:48.833244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:48.934841       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:48.934904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:48.939560       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:48.939612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:22:48.972396       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:22:48.972513       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:22:48.979542       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:48.979663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1212 23:22:51.735010       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:16:53 UTC, ends at Tue 2023-12-12 23:32:12 UTC. --
	Dec 12 23:29:36 no-preload-115023 kubelet[4258]: E1212 23:29:36.469081    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:29:48 no-preload-115023 kubelet[4258]: E1212 23:29:48.470753    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:29:51 no-preload-115023 kubelet[4258]: E1212 23:29:51.669663    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:29:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:29:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:29:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:03 no-preload-115023 kubelet[4258]: E1212 23:30:03.470706    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:30:14 no-preload-115023 kubelet[4258]: E1212 23:30:14.469948    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:30:27 no-preload-115023 kubelet[4258]: E1212 23:30:27.471146    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:30:39 no-preload-115023 kubelet[4258]: E1212 23:30:39.469529    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:30:51 no-preload-115023 kubelet[4258]: E1212 23:30:51.669322    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:52 no-preload-115023 kubelet[4258]: E1212 23:30:52.469428    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:31:03 no-preload-115023 kubelet[4258]: E1212 23:31:03.471000    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:31:14 no-preload-115023 kubelet[4258]: E1212 23:31:14.470444    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:31:27 no-preload-115023 kubelet[4258]: E1212 23:31:27.469874    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:31:42 no-preload-115023 kubelet[4258]: E1212 23:31:42.470285    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:31:51 no-preload-115023 kubelet[4258]: E1212 23:31:51.668960    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:31:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:31:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:31:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:31:57 no-preload-115023 kubelet[4258]: E1212 23:31:57.471076    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:32:10 no-preload-115023 kubelet[4258]: E1212 23:32:10.470322    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	
	* 
	* ==> storage-provisioner [cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a] <==
	* I1212 23:23:08.315071       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:23:08.329948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:23:08.330026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:23:08.341500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:23:08.341683       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7!
	I1212 23:23:08.347066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c3b8f1c-2c81-484e-a7d1-59b57e1a15e9", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7 became leader
	I1212 23:23:08.442889       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-115023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wlql5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5: exit status 1 (67.346481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wlql5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.11s)
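For reference, the post-mortem check above can be repeated by hand; a minimal sketch, assuming the no-preload-115023 context from this run is still available. Note that the pod reported as non-running lives in kube-system (see the node description above), so a namespace flag is needed when describing it directly:

	# List pods in any phase other than Running, as the harness does:
	kubectl --context no-preload-115023 get po -A --field-selector=status.phase!=Running
	# Describe the flagged pod in its actual namespace:
	kubectl --context no-preload-115023 -n kube-system describe pod metrics-server-57f55c9bc5-wlql5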

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:24:01.564873   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:24:17.802906   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:25:02.203226   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:25:24.608058   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:25:25.171923   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:25:40.852169   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:25:51.771887   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:26:10.128298   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:26:25.248815   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:26:39.568996   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:26:53.067769   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-809120 -n embed-certs-809120
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:32:18.467465651 +0000 UTC m=+5378.278523094
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
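A minimal manual equivalent of the wait this test performs, assuming the embed-certs-809120 context from this run and the label, namespace, and 9m0s timeout quoted above:

	# Check whether any dashboard pod exists at all:
	kubectl --context embed-certs-809120 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Reproduce the readiness wait with the same 9-minute budget:
	kubectl --context embed-certs-809120 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s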
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-809120 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-809120 logs -n 25: (1.624669033s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-828988 sudo cat                              | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo find                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo crio                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-828988                                       | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-685244 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | disable-driver-mounts-685244                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:12:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:12:31.006246  128282 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:12:31.006380  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006389  128282 out.go:309] Setting ErrFile to fd 2...
	I1212 23:12:31.006393  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006549  128282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:12:31.007106  128282 out.go:303] Setting JSON to false
	I1212 23:12:31.008035  128282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14105,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:12:31.008097  128282 start.go:138] virtualization: kvm guest
	I1212 23:12:31.010317  128282 out.go:177] * [default-k8s-diff-port-850839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:12:31.011782  128282 notify.go:220] Checking for updates...
	I1212 23:12:31.011787  128282 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:12:31.013177  128282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:12:31.014626  128282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:12:31.016153  128282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:12:31.017420  128282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:12:31.018789  128282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:12:31.020548  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:12:31.021022  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.021073  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.036337  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I1212 23:12:31.036724  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.037285  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.037315  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.037677  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.037910  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.038190  128282 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:12:31.038482  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.038521  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.052455  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1212 23:12:31.052897  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.053408  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.053428  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.053842  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.054041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.090916  128282 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:12:31.092159  128282 start.go:298] selected driver: kvm2
	I1212 23:12:31.092174  128282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.092313  128282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:12:31.092991  128282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.093081  128282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:12:31.108612  128282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:12:31.108979  128282 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:12:31.109050  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:12:31.109064  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:12:31.109078  128282 start_flags.go:323] config:
	{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-85083
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.109261  128282 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.110991  128282 out.go:177] * Starting control plane node default-k8s-diff-port-850839 in cluster default-k8s-diff-port-850839
	I1212 23:12:28.611488  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:31.112184  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:12:31.112223  128282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:12:31.112231  128282 cache.go:56] Caching tarball of preloaded images
	I1212 23:12:31.112315  128282 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:12:31.112331  128282 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:12:31.112435  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:12:31.112621  128282 start.go:365] acquiring machines lock for default-k8s-diff-port-850839: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:12:34.691505  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:37.763538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:43.843515  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:46.915553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:52.995487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:56.067468  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:02.147575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:05.219586  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:11.299553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:14.371547  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:20.451538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:23.523565  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:29.603544  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:32.675516  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:38.755580  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:41.827595  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:47.907601  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:50.979707  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:57.059532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:00.131511  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:06.211489  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:09.283534  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:15.363535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:18.435583  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:24.515478  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:27.587546  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:33.667567  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:36.739532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:42.819531  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:45.891616  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:51.971509  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:55.043560  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:01.123510  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:04.195575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:10.275535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:13.347520  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:19.427542  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:22.499524  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:28.579575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:31.651552  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:37.731535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:40.803533  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:46.883561  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:49.955571  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:56.035557  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:59.107536  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:05.187487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:08.259527  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:14.339497  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:17.411598  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:20.416121  127900 start.go:369] acquired machines lock for "old-k8s-version-549640" in 4m27.702597236s
	I1212 23:16:20.416185  127900 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:20.416197  127900 fix.go:54] fixHost starting: 
	I1212 23:16:20.416598  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:20.416638  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:20.431626  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I1212 23:16:20.432088  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:20.432550  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:16:20.432573  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:20.432976  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:20.433174  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:20.433352  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:16:20.435450  127900 fix.go:102] recreateIfNeeded on old-k8s-version-549640: state=Stopped err=<nil>
	I1212 23:16:20.435477  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	W1212 23:16:20.435650  127900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:20.437467  127900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-549640" ...
	I1212 23:16:20.438890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Start
	I1212 23:16:20.439060  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring networks are active...
	I1212 23:16:20.439992  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network default is active
	I1212 23:16:20.440387  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network mk-old-k8s-version-549640 is active
	I1212 23:16:20.440738  127900 main.go:141] libmachine: (old-k8s-version-549640) Getting domain xml...
	I1212 23:16:20.441435  127900 main.go:141] libmachine: (old-k8s-version-549640) Creating domain...
	I1212 23:16:21.692826  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting to get IP...
	I1212 23:16:21.693784  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.694269  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.694313  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.694229  128878 retry.go:31] will retry after 250.302126ms: waiting for machine to come up
	I1212 23:16:21.945651  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.946122  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.946145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.946067  128878 retry.go:31] will retry after 271.460868ms: waiting for machine to come up
	I1212 23:16:22.219848  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.220326  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.220352  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.220248  128878 retry.go:31] will retry after 466.723624ms: waiting for machine to come up
	I1212 23:16:20.413611  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:20.413648  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:16:20.415967  127760 machine.go:91] provisioned docker machine in 4m37.407647774s
	I1212 23:16:20.416013  127760 fix.go:56] fixHost completed within 4m37.429684827s
	I1212 23:16:20.416025  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 4m37.429713708s
	W1212 23:16:20.416055  127760 start.go:694] error starting host: provision: host is not running
	W1212 23:16:20.416230  127760 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:16:20.416241  127760 start.go:709] Will try again in 5 seconds ...
	I1212 23:16:22.689020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.689524  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.689559  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.689474  128878 retry.go:31] will retry after 384.986526ms: waiting for machine to come up
	I1212 23:16:23.076020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.076428  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.076462  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.076365  128878 retry.go:31] will retry after 673.784203ms: waiting for machine to come up
	I1212 23:16:23.752374  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.752825  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.752859  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.752777  128878 retry.go:31] will retry after 744.371791ms: waiting for machine to come up
	I1212 23:16:24.498624  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:24.499057  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:24.499088  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:24.498994  128878 retry.go:31] will retry after 1.095766265s: waiting for machine to come up
	I1212 23:16:25.596742  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:25.597192  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:25.597217  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:25.597133  128878 retry.go:31] will retry after 1.340596782s: waiting for machine to come up
	I1212 23:16:26.939593  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:26.939933  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:26.939957  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:26.939881  128878 retry.go:31] will retry after 1.546075974s: waiting for machine to come up
	I1212 23:16:25.417922  127760 start.go:365] acquiring machines lock for embed-certs-809120: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:16:28.488543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:28.488923  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:28.488949  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:28.488883  128878 retry.go:31] will retry after 2.06517547s: waiting for machine to come up
	I1212 23:16:30.555809  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:30.556300  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:30.556330  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:30.556262  128878 retry.go:31] will retry after 2.237409729s: waiting for machine to come up
	I1212 23:16:32.796273  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:32.796684  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:32.796712  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:32.796629  128878 retry.go:31] will retry after 3.535954383s: waiting for machine to come up
	I1212 23:16:36.333758  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:36.334211  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:36.334243  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:36.334143  128878 retry.go:31] will retry after 3.820382113s: waiting for machine to come up
	I1212 23:16:41.367963  128156 start.go:369] acquired machines lock for "no-preload-115023" in 4m21.778030837s
	I1212 23:16:41.368034  128156 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:41.368046  128156 fix.go:54] fixHost starting: 
	I1212 23:16:41.368459  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:41.368498  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:41.384557  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1212 23:16:41.385004  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:41.385448  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:16:41.385471  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:41.385799  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:41.386007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:16:41.386192  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:16:41.387807  128156 fix.go:102] recreateIfNeeded on no-preload-115023: state=Stopped err=<nil>
	I1212 23:16:41.387858  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	W1212 23:16:41.388030  128156 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:41.390189  128156 out.go:177] * Restarting existing kvm2 VM for "no-preload-115023" ...
	I1212 23:16:40.159111  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159503  127900 main.go:141] libmachine: (old-k8s-version-549640) Found IP for machine: 192.168.61.146
	I1212 23:16:40.159530  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserving static IP address...
	I1212 23:16:40.159543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has current primary IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159970  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.160042  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | skip adding static IP to network mk-old-k8s-version-549640 - found existing host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"}
	I1212 23:16:40.160060  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserved static IP address: 192.168.61.146
	I1212 23:16:40.160072  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for SSH to be available...
	I1212 23:16:40.160087  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Getting to WaitForSSH function...
	I1212 23:16:40.162048  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162377  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.162417  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162498  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH client type: external
	I1212 23:16:40.162571  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa (-rw-------)
	I1212 23:16:40.162609  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:16:40.162629  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | About to run SSH command:
	I1212 23:16:40.162644  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | exit 0
	I1212 23:16:40.254804  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | SSH cmd err, output: <nil>: 
	I1212 23:16:40.255235  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetConfigRaw
	I1212 23:16:40.255885  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.258196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258526  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.258551  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258806  127900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:16:40.259036  127900 machine.go:88] provisioning docker machine ...
	I1212 23:16:40.259059  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:40.259292  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259454  127900 buildroot.go:166] provisioning hostname "old-k8s-version-549640"
	I1212 23:16:40.259475  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259624  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.261311  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261561  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.261583  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261686  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.261818  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.261974  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.262114  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.262270  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.262645  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.262666  127900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-549640 && echo "old-k8s-version-549640" | sudo tee /etc/hostname
	I1212 23:16:40.395342  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-549640
	
	I1212 23:16:40.395376  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.398008  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398391  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.398430  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398533  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.398716  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.398890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.399024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.399152  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.399489  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.399510  127900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-549640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-549640/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-549640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:40.526781  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:40.526824  127900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:16:40.526847  127900 buildroot.go:174] setting up certificates
	I1212 23:16:40.526859  127900 provision.go:83] configureAuth start
	I1212 23:16:40.526877  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.527276  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.530483  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.530876  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.530908  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.531162  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.533161  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533456  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.533488  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533567  127900 provision.go:138] copyHostCerts
	I1212 23:16:40.533625  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:16:40.533645  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:16:40.533711  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:16:40.533799  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:16:40.533806  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:16:40.533829  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:16:40.533882  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:16:40.533889  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:16:40.533913  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:16:40.533957  127900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-549640 san=[192.168.61.146 192.168.61.146 localhost 127.0.0.1 minikube old-k8s-version-549640]
	I1212 23:16:40.630542  127900 provision.go:172] copyRemoteCerts
	I1212 23:16:40.630611  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:40.630639  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.633145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633408  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.633433  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633579  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.633790  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.633944  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.634162  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:40.725498  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:16:40.748097  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:16:40.769852  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:16:40.791381  127900 provision.go:86] duration metric: configureAuth took 264.501961ms
	I1212 23:16:40.791417  127900 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:40.791602  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:16:40.791678  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.794113  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794479  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.794514  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794653  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.794864  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795055  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795234  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.795443  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.795777  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.795807  127900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:16:41.103469  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:16:41.103503  127900 machine.go:91] provisioned docker machine in 844.450063ms
	I1212 23:16:41.103517  127900 start.go:300] post-start starting for "old-k8s-version-549640" (driver="kvm2")
	I1212 23:16:41.103527  127900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:41.103547  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.103894  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:41.103923  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.106459  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.106835  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.106864  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.107013  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.107190  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.107363  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.107532  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.201177  127900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:41.205686  127900 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:41.205711  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:16:41.205773  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:16:41.205862  127900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:16:41.205970  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:41.214591  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:41.240854  127900 start.go:303] post-start completed in 137.32025ms
	I1212 23:16:41.240885  127900 fix.go:56] fixHost completed within 20.824687398s
	I1212 23:16:41.240915  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.243633  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244071  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.244104  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244300  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.244517  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244651  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244806  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.244981  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:41.245337  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:41.245350  127900 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:16:41.367815  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423001.317394085
	
	I1212 23:16:41.367837  127900 fix.go:206] guest clock: 1702423001.317394085
	I1212 23:16:41.367844  127900 fix.go:219] Guest: 2023-12-12 23:16:41.317394085 +0000 UTC Remote: 2023-12-12 23:16:41.240889292 +0000 UTC m=+288.685284781 (delta=76.504793ms)
	I1212 23:16:41.367863  127900 fix.go:190] guest clock delta is within tolerance: 76.504793ms
	I1212 23:16:41.367868  127900 start.go:83] releasing machines lock for "old-k8s-version-549640", held for 20.951706122s
	I1212 23:16:41.367895  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.368219  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:41.370769  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371172  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.371196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371378  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.371904  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372069  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372157  127900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:16:41.372206  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.372409  127900 ssh_runner.go:195] Run: cat /version.json
	I1212 23:16:41.372438  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.374847  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.374869  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375341  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375373  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375401  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375419  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375526  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375661  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375749  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.375835  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.376026  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376031  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.488636  127900 ssh_runner.go:195] Run: systemctl --version
	I1212 23:16:41.494315  127900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:16:41.645474  127900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:16:41.652912  127900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:16:41.652988  127900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:16:41.667662  127900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:16:41.667680  127900 start.go:475] detecting cgroup driver to use...
	I1212 23:16:41.667747  127900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:16:41.681625  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:16:41.693475  127900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:16:41.693540  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:16:41.705743  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:16:41.719152  127900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:16:41.819641  127900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:16:41.929543  127900 docker.go:219] disabling docker service ...
	I1212 23:16:41.929617  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:16:41.943407  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:16:41.955372  127900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:16:42.063078  127900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:16:42.177422  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:16:42.192994  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:16:42.211887  127900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:16:42.211943  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.223418  127900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:16:42.223486  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.234905  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.245973  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.261016  127900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:16:42.272819  127900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:16:42.283308  127900 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:16:42.283381  127900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:16:42.296365  127900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:16:42.307038  127900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:16:42.412672  127900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:16:42.590363  127900 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:16:42.590470  127900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:16:42.596285  127900 start.go:543] Will wait 60s for crictl version
	I1212 23:16:42.596360  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:42.600633  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:16:42.638709  127900 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:16:42.638811  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.694435  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.750327  127900 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 23:16:41.391501  128156 main.go:141] libmachine: (no-preload-115023) Calling .Start
	I1212 23:16:41.391671  128156 main.go:141] libmachine: (no-preload-115023) Ensuring networks are active...
	I1212 23:16:41.392314  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network default is active
	I1212 23:16:41.392624  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network mk-no-preload-115023 is active
	I1212 23:16:41.393075  128156 main.go:141] libmachine: (no-preload-115023) Getting domain xml...
	I1212 23:16:41.393720  128156 main.go:141] libmachine: (no-preload-115023) Creating domain...
	I1212 23:16:42.669200  128156 main.go:141] libmachine: (no-preload-115023) Waiting to get IP...
	I1212 23:16:42.670068  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.670482  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.670582  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.670462  128998 retry.go:31] will retry after 201.350715ms: waiting for machine to come up
	I1212 23:16:42.874061  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.874543  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.874576  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.874492  128998 retry.go:31] will retry after 331.205906ms: waiting for machine to come up
	I1212 23:16:43.207045  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.207590  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.207618  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.207533  128998 retry.go:31] will retry after 343.139691ms: waiting for machine to come up
	I1212 23:16:43.552253  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.552737  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.552769  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.552683  128998 retry.go:31] will retry after 606.192126ms: waiting for machine to come up
	I1212 23:16:44.160409  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.160877  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.160923  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.160842  128998 retry.go:31] will retry after 713.164162ms: waiting for machine to come up
	I1212 23:16:42.751897  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:42.754490  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.754832  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:42.754867  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.755047  127900 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:16:42.759290  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:42.770851  127900 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 23:16:42.770913  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:42.822484  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:42.822559  127900 ssh_runner.go:195] Run: which lz4
	I1212 23:16:42.826907  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:16:42.831601  127900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:16:42.831633  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 23:16:44.643588  127900 crio.go:444] Took 1.816704 seconds to copy over tarball
	I1212 23:16:44.643671  127900 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:16:47.603870  127900 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960150759s)
	I1212 23:16:47.603904  127900 crio.go:451] Took 2.960288 seconds to extract the tarball
	I1212 23:16:47.603918  127900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:16:44.875548  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.875971  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.876003  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.875908  128998 retry.go:31] will retry after 928.762857ms: waiting for machine to come up
	I1212 23:16:45.806556  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:45.806983  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:45.807019  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:45.806932  128998 retry.go:31] will retry after 945.322601ms: waiting for machine to come up
	I1212 23:16:46.754374  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:46.754834  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:46.754869  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:46.754818  128998 retry.go:31] will retry after 1.373584303s: waiting for machine to come up
	I1212 23:16:48.130434  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:48.130917  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:48.130950  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:48.130870  128998 retry.go:31] will retry after 1.683447661s: waiting for machine to come up
	I1212 23:16:47.644193  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:47.696129  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:47.696156  127900 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.696314  127900 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.696273  127900 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.696242  127900 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.696306  127900 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.696371  127900 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.696445  127900 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:16:47.697649  127900 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.697713  127900 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.697816  127900 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.697955  127900 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:16:47.698013  127900 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.698109  127900 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.698124  127900 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.698341  127900 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.888397  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.897712  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.897790  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.910016  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 23:16:47.911074  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.912891  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.923071  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.995042  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:48.022161  127900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 23:16:48.022215  127900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.022270  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053132  127900 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 23:16:48.053181  127900 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.053236  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053493  127900 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 23:16:48.053531  127900 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.053588  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.123888  127900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 23:16:48.123949  127900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.123889  127900 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 23:16:48.124009  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124022  127900 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 23:16:48.124077  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124089  127900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 23:16:48.124111  127900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 23:16:48.124141  127900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.124171  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124115  127900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.124249  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.205456  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.205488  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.205609  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.205650  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.205702  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 23:16:48.205789  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.205814  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.351665  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 23:16:48.351700  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 23:16:48.360026  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 23:16:48.363255  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 23:16:48.363297  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 23:16:48.363376  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 23:16:48.363413  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:16:48.363525  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369271  127900 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 23:16:48.369289  127900 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369326  127900 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 23:16:50.628595  127900 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.259242667s)
	I1212 23:16:50.628629  127900 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 23:16:50.628679  127900 cache_images.go:92] LoadImages completed in 2.932510127s
	W1212 23:16:50.628774  127900 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1212 23:16:50.628871  127900 ssh_runner.go:195] Run: crio config
	I1212 23:16:50.696623  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:16:50.696645  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:16:50.696665  127900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:16:50.696690  127900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-549640 NodeName:old-k8s-version-549640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:16:50.696857  127900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-549640"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-549640
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.146:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:16:50.696950  127900 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-549640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:16:50.697013  127900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 23:16:50.706222  127900 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:16:50.706309  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:16:50.714679  127900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 23:16:50.732119  127900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:16:50.749596  127900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 23:16:50.766445  127900 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I1212 23:16:50.770611  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:50.783162  127900 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640 for IP: 192.168.61.146
	I1212 23:16:50.783205  127900 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:16:50.783434  127900 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:16:50.783504  127900 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:16:50.783623  127900 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.key
	I1212 23:16:50.783701  127900 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key.a124ebb4
	I1212 23:16:50.783781  127900 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key
	I1212 23:16:50.784002  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:16:50.784053  127900 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:16:50.784070  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:16:50.784118  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:16:50.784162  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:16:50.784201  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:16:50.784260  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:50.785202  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:16:50.813072  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:16:50.838714  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:16:50.863302  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:16:50.891365  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:16:50.916623  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:16:50.946894  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:16:50.974859  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:16:51.002629  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:16:51.027782  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:16:51.052384  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:16:51.077430  127900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:16:51.094703  127900 ssh_runner.go:195] Run: openssl version
	I1212 23:16:51.100625  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:16:51.111038  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116246  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116342  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.122069  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:16:51.132325  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:16:51.142392  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147278  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147353  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.153446  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:16:51.163491  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:16:51.173393  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178482  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178560  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.184710  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:16:51.194819  127900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:16:51.199808  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:16:51.206208  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:16:51.212498  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:16:51.218555  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:16:51.224923  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:16:51.231298  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:16:51.237570  127900 kubeadm.go:404] StartCluster: {Name:old-k8s-version-549640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:16:51.237672  127900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:16:51.237752  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:16:51.283890  127900 cri.go:89] found id: ""
	I1212 23:16:51.283985  127900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:16:51.296861  127900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:16:51.296897  127900 kubeadm.go:636] restartCluster start
	I1212 23:16:51.296990  127900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:16:51.306034  127900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.307730  127900 kubeconfig.go:92] found "old-k8s-version-549640" server: "https://192.168.61.146:8443"
	I1212 23:16:51.311721  127900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:16:51.320683  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.320831  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.332122  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.332145  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.332197  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.342755  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.843464  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.843575  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.854933  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:52.343493  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.343579  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.354884  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:49.816605  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:49.816934  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:49.816968  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:49.816881  128998 retry.go:31] will retry after 1.775884699s: waiting for machine to come up
	I1212 23:16:51.594388  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:51.594915  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:51.594952  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:51.594866  128998 retry.go:31] will retry after 1.948886075s: waiting for machine to come up
	I1212 23:16:53.546035  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:53.546503  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:53.546538  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:53.546441  128998 retry.go:31] will retry after 3.530621748s: waiting for machine to come up
	I1212 23:16:52.842987  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.843085  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.854637  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.343155  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.343261  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.354960  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.843482  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.843555  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.854488  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.342926  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.343028  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.357489  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.843024  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.843111  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.854764  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.343252  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.343363  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.354798  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.843831  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.843931  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.855077  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.343753  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.343827  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.354659  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.843304  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.843423  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.854727  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.343292  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.343428  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.354360  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.078854  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:57.079265  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:57.079287  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:57.079224  128998 retry.go:31] will retry after 3.552473985s: waiting for machine to come up
	I1212 23:17:01.924642  128282 start.go:369] acquired machines lock for "default-k8s-diff-port-850839" in 4m30.811975302s
	I1212 23:17:01.924716  128282 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:01.924725  128282 fix.go:54] fixHost starting: 
	I1212 23:17:01.925164  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:01.925207  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:01.942895  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1212 23:17:01.943340  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:01.943906  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:01.943938  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:01.944371  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:01.944594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:01.944819  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:01.946719  128282 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850839: state=Stopped err=<nil>
	I1212 23:17:01.946759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	W1212 23:17:01.946947  128282 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:01.949597  128282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850839" ...
	I1212 23:16:57.843410  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.843484  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.854821  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.343379  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.343470  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.354868  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.843473  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.843594  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.854752  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.343324  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.343432  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.354442  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.842979  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.843086  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.854537  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.343125  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.343201  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.354401  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.843565  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.843642  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.854663  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:01.321433  127900 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:01.321466  127900 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:01.321477  127900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:01.321534  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:01.361643  127900 cri.go:89] found id: ""
	I1212 23:17:01.361739  127900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:01.380002  127900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:01.388875  127900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:01.388944  127900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397644  127900 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397690  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:01.528111  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
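
For readers following the restart path above: after the stale-config check, the two ssh_runner invocations regenerate the cluster's certificates and kubeconfigs via kubeadm's phase subcommands. A minimal local sketch of the same invocation pattern (illustrative Go, not minikube's ssh_runner code; the config and binary paths below are simply the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runKubeadmPhase runs `kubeadm init phase <phase> all --config <config>`
// with the minikube-managed binaries prepended to PATH, as the log shows.
func runKubeadmPhase(phase, config, binDir string) error {
	cmd := exec.Command("sudo", "env", "PATH="+binDir+":"+os.Getenv("PATH"),
		"kubeadm", "init", "phase", phase, "all", "--config", config)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"     // assumed path, taken from the log
	bin := "/var/lib/minikube/binaries/v1.16.0" // assumed path, taken from the log
	for _, phase := range []string{"certs", "kubeconfig"} {
		if err := runKubeadmPhase(phase, cfg, bin); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %s failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
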
	I1212 23:17:00.635998  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636444  128156 main.go:141] libmachine: (no-preload-115023) Found IP for machine: 192.168.72.32
	I1212 23:17:00.636462  128156 main.go:141] libmachine: (no-preload-115023) Reserving static IP address...
	I1212 23:17:00.636478  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has current primary IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.636925  128156 main.go:141] libmachine: (no-preload-115023) DBG | skip adding static IP to network mk-no-preload-115023 - found existing host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"}
	I1212 23:17:00.636939  128156 main.go:141] libmachine: (no-preload-115023) Reserved static IP address: 192.168.72.32
	I1212 23:17:00.636961  128156 main.go:141] libmachine: (no-preload-115023) Waiting for SSH to be available...
	I1212 23:17:00.636971  128156 main.go:141] libmachine: (no-preload-115023) DBG | Getting to WaitForSSH function...
	I1212 23:17:00.639074  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639400  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.639443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639546  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH client type: external
	I1212 23:17:00.639586  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa (-rw-------)
	I1212 23:17:00.639629  128156 main.go:141] libmachine: (no-preload-115023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:00.639644  128156 main.go:141] libmachine: (no-preload-115023) DBG | About to run SSH command:
	I1212 23:17:00.639663  128156 main.go:141] libmachine: (no-preload-115023) DBG | exit 0
	I1212 23:17:00.734735  128156 main.go:141] libmachine: (no-preload-115023) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:00.735132  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetConfigRaw
	I1212 23:17:00.735813  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:00.738429  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.738828  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.738871  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.739049  128156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:17:00.739276  128156 machine.go:88] provisioning docker machine ...
	I1212 23:17:00.739299  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:00.739537  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739695  128156 buildroot.go:166] provisioning hostname "no-preload-115023"
	I1212 23:17:00.739717  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739879  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.742096  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742404  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.742443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742591  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.742756  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.742925  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.743067  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.743224  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.743733  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.743751  128156 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-115023 && echo "no-preload-115023" | sudo tee /etc/hostname
	I1212 23:17:00.888573  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-115023
	
	I1212 23:17:00.888610  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.891302  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891619  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.891664  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891852  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.892092  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892263  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892419  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.892584  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.892911  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.892930  128156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-115023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-115023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-115023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:01.032180  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
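
The SSH command above is the provisioner's idempotent hostname fix-up for /etc/hosts: do nothing if the hostname is already mapped, otherwise rewrite the existing 127.0.1.1 line or append a new one. A rough local Go equivalent (illustrative only, not minikube code; the hostname is the one from this run):

package main

import (
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry maps 127.0.1.1 to hostname, mirroring the grep/sed/tee logic above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	entry := "127.0.1.1 " + hostname
	// Already mapped? Nothing to do (the grep -xq guard).
	for _, l := range lines {
		if strings.TrimSpace(l) == entry {
			return nil
		}
	}
	re := regexp.MustCompile(`^127\.0\.1\.1\s`)
	for i, l := range lines {
		if re.MatchString(l) {
			lines[i] = entry // replace the existing 127.0.1.1 line (the sed branch)
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, entry) // no 127.0.1.1 line yet (the tee -a branch)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "no-preload-115023"); err != nil {
		panic(err)
	}
}
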
	I1212 23:17:01.032222  128156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:01.032257  128156 buildroot.go:174] setting up certificates
	I1212 23:17:01.032273  128156 provision.go:83] configureAuth start
	I1212 23:17:01.032291  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:01.032653  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.035024  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035334  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.035361  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035494  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.037594  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.037898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.037930  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.038066  128156 provision.go:138] copyHostCerts
	I1212 23:17:01.038122  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:01.038143  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:01.038202  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:01.038322  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:01.038334  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:01.038355  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:01.038470  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:01.038481  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:01.038499  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:01.038575  128156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.no-preload-115023 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube no-preload-115023]
	I1212 23:17:01.146965  128156 provision.go:172] copyRemoteCerts
	I1212 23:17:01.147027  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:01.147053  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.149326  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149621  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.149656  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149774  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.149969  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.150118  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.150238  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.244271  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:01.267206  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:17:01.289286  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:01.311940  128156 provision.go:86] duration metric: configureAuth took 279.648376ms
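
The configureAuth step above copies the host CA material into the machine store and mints a server certificate whose SANs cover the VM IP, localhost, and the machine name. A minimal self-signed sketch of that certificate template using crypto/x509 (illustrative only: minikube signs the server cert with its own CA rather than self-signing, and the SAN list and expiry below simply mirror the values logged above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-115023"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-115023"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.32"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity: the template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
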
	I1212 23:17:01.311970  128156 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:01.312144  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:17:01.312229  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.314543  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.314881  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.314907  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.315055  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.315281  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315469  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315658  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.315821  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.316162  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.316185  128156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:01.644687  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:01.644737  128156 machine.go:91] provisioned docker machine in 905.44182ms
	I1212 23:17:01.644750  128156 start.go:300] post-start starting for "no-preload-115023" (driver="kvm2")
	I1212 23:17:01.644764  128156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:01.644781  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.645148  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:01.645186  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.647976  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648333  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.648369  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648572  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.648769  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.648972  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.649102  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.746191  128156 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:01.750374  128156 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:01.750416  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:01.750499  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:01.750605  128156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:01.750721  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:01.760389  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:01.788014  128156 start.go:303] post-start completed in 143.244652ms
	I1212 23:17:01.788052  128156 fix.go:56] fixHost completed within 20.420006869s
	I1212 23:17:01.788083  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.790868  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791357  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.791392  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791675  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.791911  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792276  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.792463  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.792889  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.792903  128156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:01.924437  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423021.865464875
	
	I1212 23:17:01.924464  128156 fix.go:206] guest clock: 1702423021.865464875
	I1212 23:17:01.924477  128156 fix.go:219] Guest: 2023-12-12 23:17:01.865464875 +0000 UTC Remote: 2023-12-12 23:17:01.788058057 +0000 UTC m=+282.352654726 (delta=77.406818ms)
	I1212 23:17:01.924532  128156 fix.go:190] guest clock delta is within tolerance: 77.406818ms
	I1212 23:17:01.924542  128156 start.go:83] releasing machines lock for "no-preload-115023", held for 20.556534447s
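
The fixHost step above compares the guest clock (read by running date over SSH) against the host clock and proceeds only when the delta is inside a tolerance. A small illustrative Go sketch of that comparison, reusing the guest timestamp from this run and an assumed 2s tolerance (not minikube's actual tolerance):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaWithinTolerance parses the guest's `date +%s.%N` output and compares
// it to the local clock.
func clockDeltaWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// "1702423021.865464875" is the guest timestamp captured in the log above.
	delta, ok := clockDeltaWithinTolerance("1702423021.865464875", 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
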
	I1212 23:17:01.924581  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.924871  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.927873  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928206  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.928238  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928450  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929098  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929301  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929387  128156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:01.929448  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.929516  128156 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:01.929559  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.932560  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.932593  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933001  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933031  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933059  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933081  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933340  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933430  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933547  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933659  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933919  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.933923  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.934097  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.934170  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:02.029559  128156 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:02.056382  128156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:02.199375  128156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:02.207131  128156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:02.207208  128156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:02.227083  128156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:02.227111  128156 start.go:475] detecting cgroup driver to use...
	I1212 23:17:02.227174  128156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:02.241611  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:02.253610  128156 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:02.253675  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:02.266973  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:02.280712  128156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:02.406583  128156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:02.548155  128156 docker.go:219] disabling docker service ...
	I1212 23:17:02.548235  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:02.563410  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:02.575968  128156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:02.697146  128156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:02.828963  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:02.842559  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:02.865357  128156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:02.865433  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.878154  128156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:02.878231  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.892188  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.903286  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.915201  128156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:02.927665  128156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:02.938466  128156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:02.938538  128156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:02.954428  128156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:02.966197  128156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:03.109663  128156 ssh_runner.go:195] Run: sudo systemctl restart crio
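
The sed commands above retarget CRI-O's drop-in config before the crio restart: pause_image to registry.k8s.io/pause:3.9 and cgroup_manager to cgroupfs. A rough Go equivalent of that line rewrite (illustrative, not minikube code; the config path is the one from the log, and a real caller would restart crio afterwards):

package main

import (
	"os"
	"regexp"
)

// setCrioOption rewrites any line defining key (commented or not) to `key = "value"`,
// matching the behaviour of the sed expressions in the log.
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		panic(err)
	}
}
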
	I1212 23:17:03.322982  128156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:03.323068  128156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:03.329800  128156 start.go:543] Will wait 60s for crictl version
	I1212 23:17:03.329866  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.335779  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:03.385099  128156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:03.385190  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.438085  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.482280  128156 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:17:03.483965  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:03.487086  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487464  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:03.487495  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487694  128156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:03.492027  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:03.506463  128156 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:17:03.506503  128156 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:03.544301  128156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:17:03.544329  128156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:17:03.544386  128156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.544441  128156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.544474  128156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.544440  128156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.544509  128156 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.544527  128156 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 23:17:03.545656  128156 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.545678  128156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.545726  128156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.545657  128156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.545747  128156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.545758  128156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.545662  128156 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 23:17:03.546098  128156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.724978  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.727403  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.739085  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 23:17:03.747535  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.748286  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.780484  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.826808  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.834529  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.840840  128156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 23:17:03.840893  128156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.840940  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.868056  128156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 23:17:03.868106  128156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.868157  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.043948  128156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 23:17:04.044014  128156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.044063  128156 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 23:17:04.044102  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044167  128156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 23:17:04.044207  128156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.044252  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044103  128156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.044334  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044375  128156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 23:17:04.044401  128156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.044444  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:04.044446  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044489  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:04.044401  128156 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 23:17:04.044520  128156 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.044545  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.065308  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.065326  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.065380  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.065495  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.065541  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.167939  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.168062  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.207196  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.207344  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.261679  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 23:17:04.261767  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:04.293250  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 23:17:04.293382  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:04.298843  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.298927  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.298960  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.299043  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.299066  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 23:17:04.299125  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:04.299187  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299201  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.299219  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299272  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.302178  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 23:17:04.302502  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 23:17:04.311377  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 23:17:04.311421  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
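The block above is minikube's cache-load loop for pid 128156: each cached tarball is stat'ed on the VM, the transfer is skipped when the file is already present, and the image is then loaded into CRI-O's storage through podman. A rough manual equivalent, using one of the paths from this log (not a prescribed procedure), would be:

    stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2   # decide whether the copy can be skipped
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
    sudo crictl images | grep kube-scheduler                               # confirm the runtime now sees the image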
	I1212 23:17:01.950988  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Start
	I1212 23:17:01.951206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring networks are active...
	I1212 23:17:01.952109  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network default is active
	I1212 23:17:01.952459  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network mk-default-k8s-diff-port-850839 is active
	I1212 23:17:01.953041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Getting domain xml...
	I1212 23:17:01.953769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Creating domain...
	I1212 23:17:03.377195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting to get IP...
	I1212 23:17:03.378157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378619  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378696  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.378589  129129 retry.go:31] will retry after 235.08446ms: waiting for machine to come up
	I1212 23:17:03.614763  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615258  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615288  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.615169  129129 retry.go:31] will retry after 349.415903ms: waiting for machine to come up
	I1212 23:17:03.965990  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966570  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966670  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.966628  129129 retry.go:31] will retry after 318.332956ms: waiting for machine to come up
	I1212 23:17:04.286225  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286728  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.286676  129129 retry.go:31] will retry after 554.258457ms: waiting for machine to come up
	I1212 23:17:04.843362  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843928  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843975  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.843882  129129 retry.go:31] will retry after 539.399246ms: waiting for machine to come up
	I1212 23:17:05.384807  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385237  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385267  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:05.385213  129129 retry.go:31] will retry after 793.160743ms: waiting for machine to come up
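Each "waiting for machine to come up" retry above is libmachine polling libvirt for a DHCP lease on the domain's MAC address, backing off between attempts. Outside the test harness, the same information can be read directly from libvirt (names taken from this log):

    virsh net-dhcp-leases mk-default-k8s-diff-port-850839
    virsh domifaddr default-k8s-diff-port-850839 --source lease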
	I1212 23:17:02.653275  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125123388s)
	I1212 23:17:02.653305  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:02.888884  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.005743  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.124339  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:03.124427  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.154719  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.679193  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.179381  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.678654  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.701429  127900 api_server.go:72] duration metric: took 1.577102613s to wait for apiserver process to appear ...
	I1212 23:17:04.701456  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:04.701476  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:06.586652  128156 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.287578103s)
	I1212 23:17:06.586693  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 23:17:06.586710  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.28741029s)
	I1212 23:17:06.586731  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 23:17:06.586768  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:06.586859  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:09.053122  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.466228622s)
	I1212 23:17:09.053156  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 23:17:09.053187  128156 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:09.053239  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:06.180206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180792  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180826  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:06.180767  129129 retry.go:31] will retry after 1.183884482s: waiting for machine to come up
	I1212 23:17:07.365977  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366537  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:07.366465  129129 retry.go:31] will retry after 1.171346567s: waiting for machine to come up
	I1212 23:17:08.539985  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540457  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540493  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:08.540397  129129 retry.go:31] will retry after 1.176896883s: waiting for machine to come up
	I1212 23:17:09.718657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719110  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719142  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:09.719045  129129 retry.go:31] will retry after 2.075378734s: waiting for machine to come up
	I1212 23:17:09.703531  127900 api_server.go:269] stopped: https://192.168.61.146:8443/healthz: Get "https://192.168.61.146:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 23:17:09.703600  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:10.880325  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:10.880391  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:11.380886  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.408357  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.408420  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:11.880867  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.888735  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.888783  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:12.381393  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:12.390271  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:12.399780  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:12.399818  127900 api_server.go:131] duration metric: took 7.698353874s to wait for apiserver health ...
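The healthz wait above treats both the anonymous 403 and the 500 with failed poststarthooks as "not ready yet" and keeps polling until a plain 200 "ok" comes back. A hedged manual probe against the same endpoint (the ?verbose form yields the per-check breakdown seen in the log) would be:

    curl -sk https://192.168.61.146:8443/healthz
    curl -sk "https://192.168.61.146:8443/healthz?verbose"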
	I1212 23:17:12.399832  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:17:12.399842  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:12.401614  127900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:12.403088  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:12.416722  127900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
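The bridge CNI step only ships a single conflist to the node; its contents are not printed here. To see what was written, one could inspect the path from the log:

    sudo cat /etc/cni/net.d/1-k8s.conflist
    ls /etc/cni/net.d/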
	I1212 23:17:12.439451  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:12.452826  127900 system_pods.go:59] 7 kube-system pods found
	I1212 23:17:12.452870  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:12.452879  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:12.452886  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:12.452893  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Pending
	I1212 23:17:12.452901  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:12.452907  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:12.452914  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:12.452924  127900 system_pods.go:74] duration metric: took 13.446573ms to wait for pod list to return data ...
	I1212 23:17:12.452937  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:12.459638  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:12.459679  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:12.459697  127900 node_conditions.go:105] duration metric: took 6.754094ms to run NodePressure ...
	I1212 23:17:12.459722  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:12.767529  127900 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775696  127900 kubeadm.go:787] kubelet initialised
	I1212 23:17:12.775720  127900 kubeadm.go:788] duration metric: took 8.16519ms waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775730  127900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:12.781477  127900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.789136  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789163  127900 pod_ready.go:81] duration metric: took 7.661481ms waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.789174  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789183  127900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.794618  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794658  127900 pod_ready.go:81] duration metric: took 5.45869ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.794671  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794689  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.801021  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801052  127900 pod_ready.go:81] duration metric: took 6.346779ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.801065  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801074  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.845211  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845243  127900 pod_ready.go:81] duration metric: took 44.152184ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.845256  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845263  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.244325  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244373  127900 pod_ready.go:81] duration metric: took 399.10083ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.244387  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244403  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.644414  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644512  127900 pod_ready.go:81] duration metric: took 400.062676ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.644545  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644566  127900 pod_ready.go:38] duration metric: took 868.822745ms for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
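Every control-plane pod above is skipped rather than waited on because the node itself is still NotReady, so the extra wait finishes in under a second. A hand-rolled equivalent of one of these readiness checks (the selector is illustrative; the test iterates over the component labels listed above) would be:

    kubectl --context old-k8s-version-549640 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m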
	I1212 23:17:13.644601  127900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:13.674724  127900 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:13.674813  127900 kubeadm.go:640] restartCluster took 22.377904832s
	I1212 23:17:13.674838  127900 kubeadm.go:406] StartCluster complete in 22.437279451s
	I1212 23:17:13.674872  127900 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.674959  127900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:13.677846  127900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.680423  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:13.680690  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:17:13.680746  127900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:13.680815  127900 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-549640"
	I1212 23:17:13.680839  127900 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-549640"
	W1212 23:17:13.680850  127900 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:13.680938  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.681342  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.681377  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.681658  127900 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-549640"
	I1212 23:17:13.681702  127900 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-549640"
	W1212 23:17:13.681711  127900 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:13.681780  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.682200  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.682237  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.682462  127900 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-549640"
	I1212 23:17:13.682544  127900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-549640"
	I1212 23:17:13.683062  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.683126  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.702138  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1212 23:17:13.702380  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I1212 23:17:13.702684  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702944  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702956  127900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-549640" context rescaled to 1 replicas
	I1212 23:17:13.702990  127900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:13.704074  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.704211  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.706640  127900 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:13.708293  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:13.706664  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706671  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706806  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I1212 23:17:13.709240  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.709383  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.709441  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.709852  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.709874  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.710209  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.710818  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.710867  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.711123  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.711765  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.711842  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.717964  127900 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-549640"
	W1212 23:17:13.717989  127900 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:13.718020  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.718447  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.718493  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.738529  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1212 23:17:13.739214  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.739827  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.739854  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.740246  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.740847  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.740917  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.747710  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1212 23:17:13.748150  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.748772  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.748793  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.749177  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.749348  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.749413  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 23:17:13.750144  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.751385  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.753201  127900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:13.754814  127900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:13.754827  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:13.754840  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.754702  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.754893  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.756310  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.756707  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.758906  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.758937  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.758961  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.760001  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.760051  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.760288  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.763360  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.763607  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.770081  127900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:10.003107  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 23:17:10.003162  128156 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:10.003218  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:12.288548  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.285296733s)
	I1212 23:17:12.288591  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 23:17:12.288623  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:12.288674  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:13.771543  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:13.771565  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:13.769624  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I1212 23:17:13.771589  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.772282  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.772841  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.772898  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.773284  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.773451  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.775327  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.775699  127900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:13.775713  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:13.775738  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.779093  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779539  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.779563  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779784  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.779957  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.780110  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.780255  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.787297  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.787663  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.787729  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.788010  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.789645  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.789826  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.790032  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.956110  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:13.956139  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:13.974813  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:14.024369  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:14.045961  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:14.045998  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:14.133161  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.133192  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:14.342486  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.827118  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146649731s)
	I1212 23:17:14.827249  127900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:14.827300  127900 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.118984074s)
	I1212 23:17:14.827324  127900 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:15.050916  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.076057269s)
	I1212 23:17:15.051030  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051049  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.051444  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.051497  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.051508  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.051517  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051527  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.053501  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.053573  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.053586  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.229413  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.229504  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.229934  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.231467  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.231489  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.522482  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.49806272s)
	I1212 23:17:15.522554  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.522574  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.522920  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.522971  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.522989  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.523009  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.523024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.523301  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.523322  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558083  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.21554598s)
	I1212 23:17:15.558173  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558200  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.558568  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.558591  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558603  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558613  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.559348  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.559370  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.559364  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.559387  127900 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-549640"
	I1212 23:17:15.562044  127900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 23:17:11.796385  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796896  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796930  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:11.796831  129129 retry.go:31] will retry after 2.569081306s: waiting for machine to come up
	I1212 23:17:14.369090  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:14.369522  129129 retry.go:31] will retry after 3.566691604s: waiting for machine to come up
	I1212 23:17:15.563724  127900 addons.go:502] enable addons completed in 1.882971652s: enabled=[default-storageclass storage-provisioner metrics-server]
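The addon flow scp's each manifest into /etc/kubernetes/addons and applies it over SSH with the cluster's own kubectl binary. A hedged spot-check from the host that the three enabled addons actually landed (context and pod names from this log; the metrics-server Deployment name is an assumption based on the manifest applied above):

    kubectl --context old-k8s-version-549640 -n kube-system get deploy metrics-server
    kubectl --context old-k8s-version-549640 get storageclass
    kubectl --context old-k8s-version-549640 -n kube-system get pod storage-provisioner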
	I1212 23:17:17.065214  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:15.574585  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.285870336s)
	I1212 23:17:15.574622  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 23:17:15.574667  128156 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:15.574736  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:17.937618  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938021  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938052  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:17.937984  129129 retry.go:31] will retry after 2.790781234s: waiting for machine to come up
	I1212 23:17:20.730659  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731151  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731179  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:20.731128  129129 retry.go:31] will retry after 5.345575973s: waiting for machine to come up
	I1212 23:17:19.564344  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:21.564330  127900 node_ready.go:49] node "old-k8s-version-549640" has status "Ready":"True"
	I1212 23:17:21.564356  127900 node_ready.go:38] duration metric: took 6.737022414s waiting for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:21.564367  127900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:21.569573  127900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:19.606668  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.031891087s)
	I1212 23:17:19.606701  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 23:17:19.606731  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:19.606791  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:21.765860  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.159035751s)
	I1212 23:17:21.765896  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 23:17:21.765934  128156 cache_images.go:123] Successfully loaded all cached images
	I1212 23:17:21.765944  128156 cache_images.go:92] LoadImages completed in 18.221602939s
	I1212 23:17:21.766033  128156 ssh_runner.go:195] Run: crio config
	I1212 23:17:21.818966  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:21.818996  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:21.819021  128156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:21.819048  128156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-115023 NodeName:no-preload-115023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:21.819220  128156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-115023"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:21.819310  128156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-115023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
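Everything needed for the v1.29.0-rc.2 bring-up is now rendered: the kubeadm config above plus the kubelet systemd drop-in. Before the init phases run, the shipped file can be inspected, and on kubeadm releases that include the subcommand (an assumption for this binary, not confirmed by the log), validated:

    cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new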
	I1212 23:17:21.819369  128156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:17:21.829605  128156 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:21.829690  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:21.838518  128156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 23:17:21.854214  128156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:17:21.869927  128156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1212 23:17:21.886723  128156 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:21.890481  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
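The one-liner above rewrites /etc/hosts atomically: it filters out any stale control-plane.minikube.internal entry, appends the current IP, and copies the temp file back into place. A quick check that the record now resolves:

    grep control-plane.minikube.internal /etc/hosts
    getent hosts control-plane.minikube.internal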
	I1212 23:17:21.902964  128156 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023 for IP: 192.168.72.32
	I1212 23:17:21.902993  128156 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:21.903156  128156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:21.903194  128156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:21.903275  128156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.key
	I1212 23:17:21.903357  128156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key.9d394d40
	I1212 23:17:21.903393  128156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key
	I1212 23:17:21.903509  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:21.903540  128156 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:21.903550  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:21.903583  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:21.903623  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:21.903647  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:21.903687  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:21.904310  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:21.928095  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:17:21.951412  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:21.974936  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:21.997877  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:22.020598  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:22.042859  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:22.065941  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:22.088688  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:22.110493  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:22.132736  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:22.154394  128156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:22.170427  128156 ssh_runner.go:195] Run: openssl version
	I1212 23:17:22.176106  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:22.186617  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191355  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191423  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.196989  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:22.208456  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:22.219355  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224154  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224224  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.230069  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:22.240929  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:22.251836  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256441  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256496  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.261952  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
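	Note: the three "ln -fs" commands above expose each CA under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted CAs. A small Go sketch of one such step, shelling out to openssl exactly as the log does (the linkByHash helper is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and exposes the
// cert in certsDir under "<hash>.0", mirroring the ln -fs steps in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}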
	I1212 23:17:22.272452  128156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:22.277105  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:22.283114  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:22.288860  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:22.294416  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:22.300148  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:22.306380  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
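	Note: each "openssl x509 ... -checkend 86400" run above asks whether the certificate will expire within the next 24 hours (86400 seconds); a non-zero exit status would force regeneration before the restart. A minimal Go sketch of the same check done natively (file path and helper name are illustrative only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within the given window, mirroring
// `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}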
	I1212 23:17:22.316419  128156 kubeadm.go:404] StartCluster: {Name:no-preload-115023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:22.316550  128156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:22.316623  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:22.358616  128156 cri.go:89] found id: ""
	I1212 23:17:22.358703  128156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:22.368800  128156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:22.368823  128156 kubeadm.go:636] restartCluster start
	I1212 23:17:22.368883  128156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:22.378570  128156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.380161  128156 kubeconfig.go:92] found "no-preload-115023" server: "https://192.168.72.32:8443"
	I1212 23:17:22.383451  128156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:22.392995  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.393064  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.405318  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.405337  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.405370  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.416721  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.917468  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.917571  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.929995  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.417616  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.417752  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.430907  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.917522  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.917607  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.929655  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:24.417316  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.417427  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.429590  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
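	Note on the repeated "Checking apiserver status" entries above: during a cluster restart minikube polls for a running kube-apiserver process via pgrep roughly every 500ms; each "Process exited with status 1" only means no matching process exists yet. A minimal Go sketch of such a poll loop (the pgrep pattern is taken from the log; the helper, interval, and timeout are illustrative, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or the
// context expires. pgrep exits 1 when nothing matches, which is what the
// repeated "stopped: unable to get apiserver pid" lines report.
func waitForAPIServer(ctx context.Context, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver did not start: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServer(ctx, 500*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}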
	I1212 23:17:27.436348  127760 start.go:369] acquired machines lock for "embed-certs-809120" in 1m2.018372087s
	I1212 23:17:27.436407  127760 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:27.436418  127760 fix.go:54] fixHost starting: 
	I1212 23:17:27.436818  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:27.436856  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:27.453079  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1212 23:17:27.453449  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:27.453967  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:17:27.453999  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:27.454365  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:27.454580  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:27.454743  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:17:27.456367  127760 fix.go:102] recreateIfNeeded on embed-certs-809120: state=Stopped err=<nil>
	I1212 23:17:27.456395  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	W1212 23:17:27.456549  127760 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:27.458402  127760 out.go:177] * Restarting existing kvm2 VM for "embed-certs-809120" ...
	I1212 23:17:23.588762  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:26.087305  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:27.459818  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Start
	I1212 23:17:27.459994  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring networks are active...
	I1212 23:17:27.460587  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network default is active
	I1212 23:17:27.460997  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network mk-embed-certs-809120 is active
	I1212 23:17:27.461361  127760 main.go:141] libmachine: (embed-certs-809120) Getting domain xml...
	I1212 23:17:27.462026  127760 main.go:141] libmachine: (embed-certs-809120) Creating domain...
	I1212 23:17:26.081099  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Found IP for machine: 192.168.39.180
	I1212 23:17:26.081626  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has current primary IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081637  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserving static IP address...
	I1212 23:17:26.082029  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserved static IP address: 192.168.39.180
	I1212 23:17:26.082080  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.082119  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for SSH to be available...
	I1212 23:17:26.082157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | skip adding static IP to network mk-default-k8s-diff-port-850839 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"}
	I1212 23:17:26.082182  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Getting to WaitForSSH function...
	I1212 23:17:26.084444  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.084803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084864  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH client type: external
	I1212 23:17:26.084925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa (-rw-------)
	I1212 23:17:26.084971  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:26.084992  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | About to run SSH command:
	I1212 23:17:26.085006  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | exit 0
	I1212 23:17:26.175122  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:26.175455  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetConfigRaw
	I1212 23:17:26.176092  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.178747  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179016  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.179044  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179388  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:17:26.179602  128282 machine.go:88] provisioning docker machine ...
	I1212 23:17:26.179624  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:26.179853  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180033  128282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850839"
	I1212 23:17:26.180051  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180209  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.182470  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.182812  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.182848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.183003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.183193  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183374  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183538  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.183709  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.184100  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.184115  128282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850839 && echo "default-k8s-diff-port-850839" | sudo tee /etc/hostname
	I1212 23:17:26.313520  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850839
	
	I1212 23:17:26.313562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.316848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.317633  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317817  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.318047  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318229  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318344  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.318567  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.318888  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.318907  128282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850839/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:26.443174  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
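	Note: the provisioning script a few lines above makes sure /etc/hosts resolves the new hostname: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A Go sketch of the same rule applied to the file contents (the ensureLoopbackHostname helper is illustrative, not libmachine code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureLoopbackHostname mirrors the shell above: if any line already ends
// with the hostname, leave the file alone; otherwise rewrite an existing
// "127.0.1.1 ..." entry or append a new one.
func ensureLoopbackHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureLoopbackHostname(hosts, "default-k8s-diff-port-850839"))
}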
	I1212 23:17:26.443206  128282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:26.443224  128282 buildroot.go:174] setting up certificates
	I1212 23:17:26.443255  128282 provision.go:83] configureAuth start
	I1212 23:17:26.443273  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.443628  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.446155  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446467  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.446501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446568  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.449661  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450005  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.450041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450170  128282 provision.go:138] copyHostCerts
	I1212 23:17:26.450235  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:26.450258  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:26.450330  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:26.450442  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:26.450453  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:26.450483  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:26.450555  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:26.450565  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:26.450592  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:26.450656  128282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850839 san=[192.168.39.180 192.168.39.180 localhost 127.0.0.1 minikube default-k8s-diff-port-850839]
	I1212 23:17:26.688969  128282 provision.go:172] copyRemoteCerts
	I1212 23:17:26.689035  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:26.689060  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.691731  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692004  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.692033  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692207  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.692441  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.692607  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.692736  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:26.781407  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:26.804712  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:17:26.827036  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:26.848977  128282 provision.go:86] duration metric: configureAuth took 405.706324ms
	I1212 23:17:26.849006  128282 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:26.849214  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:26.849310  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.851925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852281  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.852314  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852486  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.852679  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.852860  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.853003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.853172  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.853688  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.853711  128282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:27.183932  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:27.183961  128282 machine.go:91] provisioned docker machine in 1.004345653s
	I1212 23:17:27.183972  128282 start.go:300] post-start starting for "default-k8s-diff-port-850839" (driver="kvm2")
	I1212 23:17:27.183982  128282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:27.183999  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.184348  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:27.184398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.187375  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187709  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.187759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187858  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.188054  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.188248  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.188400  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.277858  128282 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:27.282128  128282 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:27.282157  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:27.282244  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:27.282368  128282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:27.282481  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:27.291755  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:27.313541  128282 start.go:303] post-start completed in 129.554425ms
	I1212 23:17:27.313563  128282 fix.go:56] fixHost completed within 25.388839079s
	I1212 23:17:27.313586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.316388  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316737  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.316760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316934  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.317141  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317343  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317540  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.317789  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:27.318143  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:27.318158  128282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:27.436207  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423047.383892438
	
	I1212 23:17:27.436230  128282 fix.go:206] guest clock: 1702423047.383892438
	I1212 23:17:27.436237  128282 fix.go:219] Guest: 2023-12-12 23:17:27.383892438 +0000 UTC Remote: 2023-12-12 23:17:27.313567546 +0000 UTC m=+296.357388926 (delta=70.324892ms)
	I1212 23:17:27.436261  128282 fix.go:190] guest clock delta is within tolerance: 70.324892ms
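	Note: the guest clock check above runs "date +%s.%N" in the VM, parses the output as a fractional Unix timestamp, and compares it with the host's view of the time; here the ~70ms delta is inside the allowed tolerance, so no clock adjustment is made. A small Go sketch of that comparison (the helper name and the 2-second tolerance are assumptions for illustration):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether the
// skew against the local clock is inside the allowed tolerance. The float64
// conversion loses sub-microsecond precision, which is fine for this check.
func clockDeltaOK(guestStamp string, local time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := local.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Guest timestamp taken from the log line above; tolerance is illustrative.
	delta, ok, err := clockDeltaOK("1702423047.383892438", time.Unix(1702423047, 313567546), 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}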
	I1212 23:17:27.436266  128282 start.go:83] releasing machines lock for "default-k8s-diff-port-850839", held for 25.511577503s
	I1212 23:17:27.436289  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.436571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:27.439315  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439697  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.439730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440396  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440660  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440741  128282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:27.440793  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.440873  128282 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:27.440891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.443558  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443880  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443938  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.443965  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444132  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444338  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444369  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.444398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444741  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.444788  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444907  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.445073  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.528730  128282 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:27.563590  128282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:27.715220  128282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:27.722775  128282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:27.722883  128282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:27.743217  128282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:27.743264  128282 start.go:475] detecting cgroup driver to use...
	I1212 23:17:27.743344  128282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:27.759125  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:27.772532  128282 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:27.772602  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:27.786439  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:27.800413  128282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:27.905626  128282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:28.037279  128282 docker.go:219] disabling docker service ...
	I1212 23:17:28.037362  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:28.050670  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:28.063551  128282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:28.195512  128282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:28.306881  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:28.324506  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:28.344908  128282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:28.344992  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.354788  128282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:28.354883  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.364157  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.373415  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.383391  128282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:28.393203  128282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:28.401935  128282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:28.402006  128282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:28.413618  128282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:28.426007  128282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:28.536725  128282 ssh_runner.go:195] Run: sudo systemctl restart crio
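	Note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, drops any existing conmon_cgroup line and re-adds it as "pod", then reloads systemd and restarts crio. A Go sketch of the equivalent text rewrite (the rewriteCrioConf function is illustrative; minikube performs this with the sed commands shown):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands above:
// force the pause image, switch the cgroup manager to cgroupfs, and pin
// conmon_cgroup to "pod" immediately after it.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}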
	I1212 23:17:28.711815  128282 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:28.711892  128282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:28.717242  128282 start.go:543] Will wait 60s for crictl version
	I1212 23:17:28.717306  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:17:28.724383  128282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:28.779687  128282 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:28.779781  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.834147  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.894131  128282 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:24.917347  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.917438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.928690  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.417259  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.417343  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.428544  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.917136  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.917212  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.927813  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.417826  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.417917  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.428147  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.917724  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.917803  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.929515  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.416997  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.417102  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.428180  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.917712  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.917830  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.931264  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.417370  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.417479  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.432478  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.916907  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.917039  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.932698  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:29.416883  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.416989  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.434138  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.895767  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:28.898899  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899233  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:28.899276  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899500  128282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:28.903950  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:28.917270  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:28.917383  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:28.956752  128282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:28.956832  128282 ssh_runner.go:195] Run: which lz4
	I1212 23:17:28.961387  128282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:17:28.965850  128282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:28.965925  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:30.869493  128282 crio.go:444] Took 1.908152 seconds to copy over tarball
	I1212 23:17:30.869580  128282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
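	Note: because crictl reported no preloaded images, the ~458 MB preload tarball for v1.28.4/cri-o is copied to /preloaded.tar.lz4 and unpacked into /var with "tar -I lz4". A compact Go sketch of the probe-and-extract step (the extractPreload helper and its error handling are illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload checks for the copied tarball and unpacks it into /var,
// the same stat + `tar -I lz4 -C /var -xf` sequence the log shows.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing, would need to scp it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}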
	I1212 23:17:28.610279  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:31.088625  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:28.873664  127760 main.go:141] libmachine: (embed-certs-809120) Waiting to get IP...
	I1212 23:17:28.874489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:28.874895  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:28.874992  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:28.874848  129329 retry.go:31] will retry after 244.313261ms: waiting for machine to come up
	I1212 23:17:29.120442  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.120959  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.120997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.120852  129329 retry.go:31] will retry after 369.234988ms: waiting for machine to come up
	I1212 23:17:29.491516  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.492081  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.492124  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.492035  129329 retry.go:31] will retry after 448.746179ms: waiting for machine to come up
	I1212 23:17:29.942643  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.943286  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.943319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.943229  129329 retry.go:31] will retry after 520.98965ms: waiting for machine to come up
	I1212 23:17:30.465955  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:30.466468  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:30.466503  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:30.466430  129329 retry.go:31] will retry after 617.123622ms: waiting for machine to come up
	I1212 23:17:31.085159  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.085706  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.085746  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.085665  129329 retry.go:31] will retry after 853.539861ms: waiting for machine to come up
	I1212 23:17:31.940795  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.941240  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.941265  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.941169  129329 retry.go:31] will retry after 960.346145ms: waiting for machine to come up
	I1212 23:17:29.916897  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.917007  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.932055  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.417555  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.417657  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.433218  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.917841  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.917967  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.933255  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.417271  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.417357  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.429192  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.917804  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.917908  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.930333  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:32.393106  128156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:32.393209  128156 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:32.393228  128156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:32.393315  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:32.445688  128156 cri.go:89] found id: ""
	I1212 23:17:32.445774  128156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:32.462269  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:32.473687  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:32.473768  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483043  128156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483075  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:32.656758  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.442637  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.666131  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.751061  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.855861  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:33.855952  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:33.879438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.403317  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.178083  128282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.308463726s)
	I1212 23:17:34.178124  128282 crio.go:451] Took 3.308601 seconds to extract the tarball
	I1212 23:17:34.178136  128282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:34.219740  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:34.268961  128282 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:34.268987  128282 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:34.269051  128282 ssh_runner.go:195] Run: crio config
	I1212 23:17:34.326979  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:34.327007  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:34.327033  128282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:34.327060  128282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850839 NodeName:default-k8s-diff-port-850839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:34.327252  128282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:34.327353  128282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
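	The block above is the kubeadm configuration and kubelet systemd drop-in that minikube renders before reconfiguring the cluster; the following lines copy them to /var/tmp/minikube/kubeadm.yaml.new and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node. A sketch for reading the rendered files back afterwards, assuming the default-k8s-diff-port-850839 profile is still up:
	    # Inspect the generated files on the node (paths taken from the log above).
	    minikube -p default-k8s-diff-port-850839 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
	    minikube -p default-k8s-diff-port-850839 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    minikube -p default-k8s-diff-port-850839 ssh -- systemctl cat kubelet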
	I1212 23:17:34.327425  128282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:34.338300  128282 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:34.338385  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:34.347329  128282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 23:17:34.364120  128282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:34.380374  128282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 23:17:34.398219  128282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:34.402134  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:34.415197  128282 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839 for IP: 192.168.39.180
	I1212 23:17:34.415252  128282 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:34.415436  128282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:34.415472  128282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:34.415540  128282 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.key
	I1212 23:17:34.415593  128282 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key.66237cde
	I1212 23:17:34.415626  128282 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key
	I1212 23:17:34.415739  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:34.415780  128282 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:34.415793  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:34.415841  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:34.415886  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:34.415931  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:34.415990  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:34.416632  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:34.440783  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:34.466303  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:34.491267  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:17:34.516659  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:34.542472  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:34.569367  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:34.599627  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:34.628781  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:34.655361  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:34.681199  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:34.706068  128282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:34.724142  128282 ssh_runner.go:195] Run: openssl version
	I1212 23:17:34.730108  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:34.740221  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745118  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745203  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.751091  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:34.761120  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:34.771456  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776480  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776559  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.782833  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:34.793597  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:34.804519  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809767  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809831  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.815838  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
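	Each openssl/ln pair above installs a CA certificate under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL looks up trusted certificates. To see where one of those hash names comes from, using the minikubeCA path from the log:
	    # Print the subject hash OpenSSL uses for the /etc/ssl/certs/<hash>.0 symlink.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # Prints b5213941 for this CA, matching the symlink created above.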
	I1212 23:17:34.825967  128282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:34.831487  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:34.838280  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:34.845663  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:34.854810  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:34.862962  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:34.869641  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
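	Each certificate check above uses openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that result is what decides whether the existing control-plane certs can be reused on restart. The same check for one certificate, with an explicit message added purely for illustration:
	    # -checkend N fails (exit 1) when the certificate is within N seconds of expiry.
	    CERT=/var/lib/minikube/certs/etcd/server.crt   # path taken from the log above
	    if sudo openssl x509 -noout -in "$CERT" -checkend 86400; then
	      echo "ok: valid for at least another 24h"
	    else
	      echo "needs regeneration: expires within 24h (or could not be read)"
	    fi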
	I1212 23:17:34.876373  128282 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:34.876509  128282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:34.876579  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:34.918413  128282 cri.go:89] found id: ""
	I1212 23:17:34.918486  128282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:34.928267  128282 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:34.928305  128282 kubeadm.go:636] restartCluster start
	I1212 23:17:34.928396  128282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:34.938202  128282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.939397  128282 kubeconfig.go:92] found "default-k8s-diff-port-850839" server: "https://192.168.39.180:8444"
	I1212 23:17:34.941945  128282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:34.953458  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.953552  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.965537  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.965561  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.965623  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.977454  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.478209  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.478292  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.505825  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.978537  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.978615  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.991422  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:33.591861  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:35.629760  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:32.902889  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:32.903556  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:32.903588  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:32.903500  129329 retry.go:31] will retry after 1.225619987s: waiting for machine to come up
	I1212 23:17:34.130560  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:34.131066  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:34.131098  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:34.131009  129329 retry.go:31] will retry after 1.544530633s: waiting for machine to come up
	I1212 23:17:35.677455  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:35.677916  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:35.677939  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:35.677902  129329 retry.go:31] will retry after 1.740004665s: waiting for machine to come up
	I1212 23:17:37.419743  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:37.420167  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:37.420203  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:37.420121  129329 retry.go:31] will retry after 2.220250897s: waiting for machine to come up
	I1212 23:17:34.902923  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.402835  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.903269  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.403728  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.903298  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.403775  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.903663  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.403892  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.429370  128156 api_server.go:72] duration metric: took 4.573508338s to wait for apiserver process to appear ...
	I1212 23:17:38.429402  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:38.429424  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.429952  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.430019  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.430455  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.931234  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:36.478240  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.478317  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.494437  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:36.978574  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.978654  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.995711  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.478404  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.478484  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.492356  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.977979  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.978123  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.993637  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.478102  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.478227  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.494347  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.977645  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.977771  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.994288  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.477795  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.477942  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.495986  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.978587  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.978695  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.994551  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.477958  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.478056  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.492956  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.978560  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.978663  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.994199  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.089524  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:40.591793  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:39.643094  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:39.643562  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:39.643603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:39.643508  129329 retry.go:31] will retry after 2.987735855s: waiting for machine to come up
	I1212 23:17:42.633477  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:42.633958  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:42.633993  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:42.633907  129329 retry.go:31] will retry after 3.131576961s: waiting for machine to come up
	I1212 23:17:41.334632  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:41.334685  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:41.334703  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.392719  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.392768  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.431413  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.445393  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.445428  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.930605  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.935880  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.935918  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.430551  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.435690  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:42.435720  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.931341  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.936295  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:17:42.944125  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:17:42.944163  128156 api_server.go:131] duration metric: took 4.514753942s to wait for apiserver health ...
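	The polling above queries /healthz anonymously over HTTPS: it returns 403 while the RBAC rules that permit unauthenticated health checks are not yet bootstrapped, then verbose 500 responses listing the post-start hooks still failing, and finally 200 once rbac/bootstrap-roles and the scheduling priority classes complete. Once a kubeconfig for the cluster is available (for example via minikube -p no-preload-115023 update-context), the same verbose view can be reproduced with kubectl:
	    # Show per-check health detail through the API server, as in the log output above.
	    kubectl get --raw '/healthz?verbose'
	    # Healthy output lists "[+]<check> ok" lines and ends with "healthz check passed".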
	I1212 23:17:42.944173  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:42.944179  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:42.945951  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:42.947258  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:42.957745  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:42.978269  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:42.990231  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:42.990267  128156 system_pods.go:61] "coredns-76f75df574-2rdhr" [266c2440-a927-476c-b918-d0712834fc2f] Running
	I1212 23:17:42.990274  128156 system_pods.go:61] "etcd-no-preload-115023" [522ee237-12e0-4b83-9e20-05713cd87c7d] Running
	I1212 23:17:42.990281  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [9048886a-1b8b-407d-bd71-c5a850d88a5f] Running
	I1212 23:17:42.990287  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [4652e03f-2622-41d8-8791-bcc648d43432] Running
	I1212 23:17:42.990292  128156 system_pods.go:61] "kube-proxy-rqhmc" [b7514603-3389-4a38-b24a-e9c7948364bc] Running
	I1212 23:17:42.990299  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [7ce16391-9627-454b-b0de-27af47921997] Running
	I1212 23:17:42.990308  128156 system_pods.go:61] "metrics-server-57f55c9bc5-b42rv" [f27bd873-340b-4ae1-922a-ed8f52d558dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:42.990316  128156 system_pods.go:61] "storage-provisioner" [d9565f7f-dcf4-4e4d-88fd-e8a54bbf0e40] Running
	I1212 23:17:42.990327  128156 system_pods.go:74] duration metric: took 12.031472ms to wait for pod list to return data ...
	I1212 23:17:42.990347  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:42.994787  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:42.994817  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:42.994827  128156 node_conditions.go:105] duration metric: took 4.471497ms to run NodePressure ...
	I1212 23:17:42.994844  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.281299  128156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:43.286299  128156 retry.go:31] will retry after 184.15509ms: kubelet not initialised
	I1212 23:17:43.476354  128156 retry.go:31] will retry after 533.806598ms: kubelet not initialised
	I1212 23:17:44.036349  128156 retry.go:31] will retry after 483.473669ms: kubelet not initialised
	I1212 23:17:41.477798  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.477898  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.493963  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:41.977991  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.978077  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.994590  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.478242  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.478334  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.495374  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.978495  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.978597  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.992337  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.477604  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.477667  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.491061  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.977638  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.977754  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.991654  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.478308  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:44.478409  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:44.494965  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.953708  128282 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:44.953763  128282 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:44.953780  128282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:44.953874  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:45.003440  128282 cri.go:89] found id: ""
	I1212 23:17:45.003519  128282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:45.021471  128282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:45.036134  128282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:45.036203  128282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049188  128282 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049214  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.197549  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.958707  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.088583  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.587947  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:47.588918  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.768814  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:45.769238  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:45.769270  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:45.769171  129329 retry.go:31] will retry after 3.722952815s: waiting for machine to come up
	I1212 23:17:44.529285  128156 kubeadm.go:787] kubelet initialised
	I1212 23:17:44.529310  128156 kubeadm.go:788] duration metric: took 1.247981757s waiting for restarted kubelet to initialise ...
	I1212 23:17:44.529321  128156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:44.551751  128156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:46.588427  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:48.589582  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:46.161702  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.251040  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.344286  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:46.344385  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.359646  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.875339  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.375793  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.875532  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.375394  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.875412  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.903144  128282 api_server.go:72] duration metric: took 2.558861066s to wait for apiserver process to appear ...
	I1212 23:17:48.903170  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:48.903188  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.903660  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:48.903697  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.904122  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:49.404880  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:50.088813  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.089208  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:49.494927  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495446  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has current primary IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495474  127760 main.go:141] libmachine: (embed-certs-809120) Found IP for machine: 192.168.50.221
	I1212 23:17:49.495489  127760 main.go:141] libmachine: (embed-certs-809120) Reserving static IP address...
	I1212 23:17:49.495884  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.495933  127760 main.go:141] libmachine: (embed-certs-809120) DBG | skip adding static IP to network mk-embed-certs-809120 - found existing host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"}
	I1212 23:17:49.495954  127760 main.go:141] libmachine: (embed-certs-809120) Reserved static IP address: 192.168.50.221
	I1212 23:17:49.495971  127760 main.go:141] libmachine: (embed-certs-809120) Waiting for SSH to be available...
	I1212 23:17:49.495987  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Getting to WaitForSSH function...
	I1212 23:17:49.498007  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498362  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.498398  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498514  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH client type: external
	I1212 23:17:49.498545  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa (-rw-------)
	I1212 23:17:49.498583  127760 main.go:141] libmachine: (embed-certs-809120) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:49.498598  127760 main.go:141] libmachine: (embed-certs-809120) DBG | About to run SSH command:
	I1212 23:17:49.498615  127760 main.go:141] libmachine: (embed-certs-809120) DBG | exit 0
	I1212 23:17:49.635655  127760 main.go:141] libmachine: (embed-certs-809120) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:49.636039  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetConfigRaw
	I1212 23:17:49.636795  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.639601  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640032  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.640059  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640367  127760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:17:49.640604  127760 machine.go:88] provisioning docker machine ...
	I1212 23:17:49.640629  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:49.640901  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641044  127760 buildroot.go:166] provisioning hostname "embed-certs-809120"
	I1212 23:17:49.641066  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641184  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.643599  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644050  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.644082  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644210  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.644439  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644612  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644791  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.644961  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.645333  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.645350  127760 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-809120 && echo "embed-certs-809120" | sudo tee /etc/hostname
	I1212 23:17:49.779263  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-809120
	
	I1212 23:17:49.779298  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.782329  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782739  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.782772  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782891  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.783133  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783306  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783466  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.783641  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.784029  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.784055  127760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-809120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-809120/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-809120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:49.914603  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:49.914641  127760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:49.914673  127760 buildroot.go:174] setting up certificates
	I1212 23:17:49.914686  127760 provision.go:83] configureAuth start
	I1212 23:17:49.914704  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.915021  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.918281  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918661  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.918715  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918849  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.921184  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921566  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.921603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921732  127760 provision.go:138] copyHostCerts
	I1212 23:17:49.921811  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:49.921824  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:49.921891  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:49.922013  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:49.922030  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:49.922061  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:49.922139  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:49.922149  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:49.922174  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:49.922255  127760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-809120 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube embed-certs-809120]
	I1212 23:17:50.309293  127760 provision.go:172] copyRemoteCerts
	I1212 23:17:50.309361  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:50.309389  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.312319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.312745  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312942  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.313157  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.313362  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.313554  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.401075  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:50.426930  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:17:50.452785  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:50.480062  127760 provision.go:86] duration metric: configureAuth took 565.356144ms
	I1212 23:17:50.480098  127760 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:50.480377  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:50.480523  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.483621  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484035  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.484091  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484244  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.484455  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484603  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484728  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.484903  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.485264  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.485282  127760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:50.842779  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:50.842815  127760 machine.go:91] provisioned docker machine in 1.202192917s
	I1212 23:17:50.842831  127760 start.go:300] post-start starting for "embed-certs-809120" (driver="kvm2")
	I1212 23:17:50.842846  127760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:50.842882  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:50.843282  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:50.843318  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.846267  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846670  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.846704  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846881  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.847102  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.847322  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.847496  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.934904  127760 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:50.939875  127760 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:50.939912  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:50.940000  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:50.940130  127760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:50.940242  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:50.950764  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:50.977204  127760 start.go:303] post-start completed in 134.34972ms
	I1212 23:17:50.977232  127760 fix.go:56] fixHost completed within 23.540815255s
	I1212 23:17:50.977256  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.980553  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981029  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.981065  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981350  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.981611  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981766  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981917  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.982111  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.982448  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.982467  127760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:51.096273  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423071.035304579
	
	I1212 23:17:51.096303  127760 fix.go:206] guest clock: 1702423071.035304579
	I1212 23:17:51.096311  127760 fix.go:219] Guest: 2023-12-12 23:17:51.035304579 +0000 UTC Remote: 2023-12-12 23:17:50.977236465 +0000 UTC m=+368.149225502 (delta=58.068114ms)
	I1212 23:17:51.096365  127760 fix.go:190] guest clock delta is within tolerance: 58.068114ms
	I1212 23:17:51.096375  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 23.659994787s
	I1212 23:17:51.096401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.096676  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:51.099275  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099683  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.099714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099864  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100586  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100671  127760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:51.100713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.100833  127760 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:51.100859  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.103808  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104103  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104214  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104268  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104379  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104415  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104405  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104615  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104620  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104817  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.104999  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.105058  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.105220  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.214734  127760 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:51.221556  127760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:51.379699  127760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:51.386319  127760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:51.386411  127760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:51.406594  127760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:51.406623  127760 start.go:475] detecting cgroup driver to use...
	I1212 23:17:51.406707  127760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:51.421646  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:51.439574  127760 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:51.439651  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:51.456389  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:51.469380  127760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:51.576093  127760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:51.711468  127760 docker.go:219] disabling docker service ...
	I1212 23:17:51.711548  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:51.726747  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:51.739661  127760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:51.852974  127760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:51.973603  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:51.986983  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:52.004739  127760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:52.004809  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.017255  127760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:52.017345  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.029275  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.040398  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.051671  127760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:52.062036  127760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:52.070879  127760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:52.070958  127760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:52.087878  127760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:52.099487  127760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:52.246621  127760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:52.445182  127760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:52.445259  127760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:52.450378  127760 start.go:543] Will wait 60s for crictl version
	I1212 23:17:52.450458  127760 ssh_runner.go:195] Run: which crictl
	I1212 23:17:52.454778  127760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:52.497569  127760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:52.497679  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.562042  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.622289  127760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:52.623892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:52.626997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627438  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:52.627474  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627731  127760 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:52.633387  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:52.647682  127760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:52.647763  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:52.691061  127760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:52.691138  127760 ssh_runner.go:195] Run: which lz4
	I1212 23:17:52.695575  127760 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:52.701228  127760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:52.701265  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:53.042479  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.042516  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.042532  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.134475  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.134511  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.404943  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.413791  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.413829  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:53.904341  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.916515  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.916564  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:54.404229  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:54.414091  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:17:54.428577  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:17:54.428615  128282 api_server.go:131] duration metric: took 5.525437271s to wait for apiserver health ...
	I1212 23:17:54.428628  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:54.428638  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:54.430838  128282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
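The /healthz probes above move from connection refused, to 403 for the anonymous user, to 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, and finally to 200. A minimal Go sketch of such a probe loop, with a placeholder endpoint and an intentionally permissive TLS config purely for illustration (minikube's real client authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200 OK or
// the deadline passes. Skipping TLS verification is for illustration only.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.180:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

Polling about every half second, as the log does, keeps the wait responsive without hammering an apiserver that is still running its post-start hooks.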
	I1212 23:17:50.589742  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.593395  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:54.432405  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:54.450147  128282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:54.496866  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:54.519276  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:54.519327  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:17:54.519339  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:17:54.519354  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:17:54.519405  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:17:54.519418  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:17:54.519434  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:17:54.519447  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:54.519484  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:17:54.519498  128282 system_pods.go:74] duration metric: took 22.603103ms to wait for pod list to return data ...
	I1212 23:17:54.519512  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:54.526046  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:54.526083  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:54.526098  128282 node_conditions.go:105] duration metric: took 6.575834ms to run NodePressure ...
	I1212 23:17:54.526127  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:54.979886  128282 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991132  128282 kubeadm.go:787] kubelet initialised
	I1212 23:17:54.991169  128282 kubeadm.go:788] duration metric: took 11.248765ms waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991185  128282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:54.999550  128282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.008465  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008494  128282 pod_ready.go:81] duration metric: took 8.904589ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.008508  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008516  128282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.020120  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020152  128282 pod_ready.go:81] duration metric: took 11.625987ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.020164  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020191  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.030018  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030056  128282 pod_ready.go:81] duration metric: took 9.856172ms waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.030074  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030083  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.039957  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.039997  128282 pod_ready.go:81] duration metric: took 9.902972ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.040015  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.040025  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.384922  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384964  128282 pod_ready.go:81] duration metric: took 344.925878ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.384979  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384988  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.791268  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791307  128282 pod_ready.go:81] duration metric: took 406.306307ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.791323  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791335  128282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:56.186386  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186484  128282 pod_ready.go:81] duration metric: took 395.136012ms waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:56.186514  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186553  128282 pod_ready.go:38] duration metric: took 1.195355612s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
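
The pod_ready loop above polls each system-critical pod's Ready condition and bails out early while the node itself still reports Ready=False. A minimal Go sketch of that per-pod polling via kubectl, with names taken from this log and an arbitrary 4-minute bound:

// Sketch: wait for a pod's Ready condition, as the pod_ready waiters above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(ctx, ns, pod string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		if podReady("default-k8s-diff-port-850839", "kube-system", "coredns-5dd5756b68-nrpzf") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
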
	I1212 23:17:56.186577  128282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:56.201434  128282 ops.go:34] apiserver oom_adj: -16
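
The ops.go line above reads the apiserver's legacy /proc/<pid>/oom_adj value, which is expected to be -16 for the static pod. A standard-library Go sketch of the same check, assuming it runs on the node:

// Sketch: read the kube-apiserver's oom_adj, as ops.go logs above (-16 expected).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pids, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(pids))[0] // take the first match
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
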
	I1212 23:17:56.201462  128282 kubeadm.go:640] restartCluster took 21.273148264s
	I1212 23:17:56.201473  128282 kubeadm.go:406] StartCluster complete in 21.325115034s
	I1212 23:17:56.201496  128282 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.201592  128282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:56.204683  128282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.205095  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:56.205222  128282 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:56.205300  128282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205321  128282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205330  128282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205346  128282 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205361  128282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850839"
	W1212 23:17:56.205363  128282 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:56.205324  128282 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205445  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205360  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 23:17:56.205501  128282 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:56.205595  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205832  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205855  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205918  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205939  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205978  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.206077  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.215695  128282 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850839" context rescaled to 1 replicas
	I1212 23:17:56.215745  128282 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:56.219003  128282 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:56.221363  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.223684  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I1212 23:17:56.223901  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1212 23:17:56.224018  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I1212 23:17:56.224530  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.224610  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225015  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225250  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.225260  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.225597  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.225990  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.226015  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.226308  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.226318  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.227368  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.227535  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.229799  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.229817  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.230427  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.232575  128282 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-850839"
	W1212 23:17:56.232593  128282 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:56.232623  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.233075  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233110  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.233880  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233930  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.245636  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1212 23:17:56.246119  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.246606  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.246623  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.246950  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.247098  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.248959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.251159  128282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:56.249918  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1212 23:17:56.251294  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1212 23:17:56.252768  128282 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.252783  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:56.252798  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.253647  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.253753  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.254219  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254233  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254340  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254347  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254690  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254749  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.255310  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.255335  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.256017  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256612  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.256639  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.257003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.257189  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.257402  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.258242  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.260097  128282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:54.115994  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:55.606824  127900 pod_ready.go:92] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.606858  127900 pod_ready.go:81] duration metric: took 34.03725266s waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.606872  127900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619163  127900 pod_ready.go:92] pod "etcd-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.619197  127900 pod_ready.go:81] duration metric: took 12.316097ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619212  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627282  127900 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.627313  127900 pod_ready.go:81] duration metric: took 8.08913ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627328  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634928  127900 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.634962  127900 pod_ready.go:81] duration metric: took 7.625067ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634978  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644531  127900 pod_ready.go:92] pod "kube-proxy-b6lz6" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.644558  127900 pod_ready.go:81] duration metric: took 9.571853ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644572  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985318  127900 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.985350  127900 pod_ready.go:81] duration metric: took 340.769789ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985366  127900 pod_ready.go:38] duration metric: took 34.420989087s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:55.985382  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:55.985443  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:56.008913  127900 api_server.go:72] duration metric: took 42.305439195s to wait for apiserver process to appear ...
	I1212 23:17:56.009000  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:56.009030  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:56.017005  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:56.018170  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:56.018198  127900 api_server.go:131] duration metric: took 9.18267ms to wait for apiserver health ...
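
The healthz probe above is a plain HTTPS GET against the apiserver. A minimal Go sketch of that probe; TLS verification is skipped here only to keep the example self-contained (/healthz is normally readable without client certificates unless anonymous auth is disabled):

// Sketch: probe the apiserver /healthz endpoint shown above and expect 200 / "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.146:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}
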
	I1212 23:17:56.018209  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:56.189360  127900 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:56.189394  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.189401  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.189408  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.189415  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.189421  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.189428  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.189437  127900 system_pods.go:61] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.189447  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.189462  127900 system_pods.go:74] duration metric: took 171.24435ms to wait for pod list to return data ...
	I1212 23:17:56.189477  127900 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:17:56.386180  127900 default_sa.go:45] found service account: "default"
	I1212 23:17:56.386211  127900 default_sa.go:55] duration metric: took 196.72345ms for default service account to be created ...
	I1212 23:17:56.386223  127900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:17:56.591313  127900 system_pods.go:86] 8 kube-system pods found
	I1212 23:17:56.591345  127900 system_pods.go:89] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.591354  127900 system_pods.go:89] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.591361  127900 system_pods.go:89] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.591369  127900 system_pods.go:89] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.591375  127900 system_pods.go:89] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.591382  127900 system_pods.go:89] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.591393  127900 system_pods.go:89] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.591401  127900 system_pods.go:89] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.591414  127900 system_pods.go:126] duration metric: took 205.183283ms to wait for k8s-apps to be running ...
	I1212 23:17:56.591429  127900 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:17:56.591482  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.611938  127900 system_svc.go:56] duration metric: took 20.493956ms WaitForService to wait for kubelet.
	I1212 23:17:56.611982  127900 kubeadm.go:581] duration metric: took 42.908516938s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:17:56.612014  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:56.785799  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:56.785841  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:56.785856  127900 node_conditions.go:105] duration metric: took 173.834506ms to run NodePressure ...
	I1212 23:17:56.785874  127900 start.go:228] waiting for startup goroutines ...
	I1212 23:17:56.785883  127900 start.go:233] waiting for cluster config update ...
	I1212 23:17:56.785898  127900 start.go:242] writing updated cluster config ...
	I1212 23:17:56.786402  127900 ssh_runner.go:195] Run: rm -f paused
	I1212 23:17:56.860461  127900 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 23:17:56.862646  127900 out.go:177] 
	W1212 23:17:56.864213  127900 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 23:17:56.865656  127900 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 23:17:56.867482  127900 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-549640" cluster and "default" namespace by default
	I1212 23:17:54.719978  127760 crio.go:444] Took 2.024442 seconds to copy over tarball
	I1212 23:17:54.720063  127760 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:56.261553  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:56.261577  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:56.261599  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.269093  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269478  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.269501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269778  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.269969  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.270192  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.270348  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.273173  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1212 23:17:56.273551  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.274146  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.274170  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.274479  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.274657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.276224  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.276536  128282 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.276553  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:56.276572  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.279571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.279991  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.280030  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.280183  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.280395  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.280562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.280708  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.399444  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.447026  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:56.447058  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:56.453920  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.474280  128282 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:56.474316  128282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:17:56.509564  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:56.509598  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:56.575180  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:56.575217  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:56.641478  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:58.298873  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.89938362s)
	I1212 23:17:58.298942  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.298948  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.844991558s)
	I1212 23:17:58.298957  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.298986  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299063  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299326  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299356  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299367  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299387  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299439  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299448  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299463  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299479  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299489  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299673  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299690  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299850  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299879  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299899  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.308876  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.308905  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.309195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.309232  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.309241  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.418788  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.777244462s)
	I1212 23:17:58.418849  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.418866  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.419251  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.419285  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.419297  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.419308  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.420803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.420837  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.420857  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.420876  128282 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:58.591048  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:58.635345  128282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:17:54.595102  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:57.089235  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:58.815643  128282 addons.go:502] enable addons completed in 2.610454188s: enabled=[storage-provisioner default-storageclass metrics-server]
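
The addon flow above scp's the manifests into /etc/kubernetes/addons and applies them with the pinned kubectl against the node-local kubeconfig. A minimal Go sketch of that apply step as it would run on the node (paths and version taken from this log; `minikube addons enable metrics-server` is the normal front door):

// Sketch: apply the metrics-server addon manifests exactly as the log's kubectl call does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
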
	I1212 23:17:58.247448  127760 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.527350021s)
	I1212 23:17:58.247482  127760 crio.go:451] Took 3.527472 seconds to extract the tarball
	I1212 23:17:58.247500  127760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:58.292239  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:58.347669  127760 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:58.347700  127760 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:58.347774  127760 ssh_runner.go:195] Run: crio config
	I1212 23:17:58.410577  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:17:58.410604  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:58.410627  127760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:58.410658  127760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-809120 NodeName:embed-certs-809120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:58.410874  127760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-809120"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:58.410973  127760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-809120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:58.411040  127760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:58.422571  127760 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:58.422655  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:58.432833  127760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:17:58.449996  127760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:58.468807  127760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 23:17:58.487568  127760 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:58.492547  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
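
The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A Go sketch of the same idempotent edit (must run as root; the IP comes from this log):

// Sketch: ensure /etc/hosts maps control-plane.minikube.internal to the node IP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.221\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any previous control-plane record, as the grep -v in the log does
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println(err)
	}
}
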
	I1212 23:17:58.505497  127760 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120 for IP: 192.168.50.221
	I1212 23:17:58.505548  127760 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:58.505759  127760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:58.505820  127760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:58.505891  127760 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/client.key
	I1212 23:17:58.585996  127760 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key.edab0817
	I1212 23:17:58.586114  127760 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key
	I1212 23:17:58.586288  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:58.586319  127760 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:58.586330  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:58.586356  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:58.586381  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:58.586418  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:58.586483  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:58.587254  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:58.615215  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:58.644237  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:58.670345  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:58.694986  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:58.719944  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:58.744701  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:58.768614  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:58.792922  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:58.815723  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:58.840192  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:58.864277  127760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:58.883069  127760 ssh_runner.go:195] Run: openssl version
	I1212 23:17:58.889642  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:58.901893  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906910  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906964  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.912769  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:58.924171  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:58.937368  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942604  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942681  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.948759  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:58.959757  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:58.971091  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976035  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976105  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.982246  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:58.994786  127760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:58.999625  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:59.006233  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:59.012668  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:59.018959  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:59.025039  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:59.031628  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
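
The openssl `-checkend 86400` runs above verify that each certificate remains valid for at least another 24 hours. A standard-library Go sketch of the same check for one of the certificates listed in the log:

// Sketch: equivalent of `openssl x509 -noout -checkend 86400` for one cert from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}
}
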
	I1212 23:17:59.037633  127760 kubeadm.go:404] StartCluster: {Name:embed-certs-809120 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:59.037779  127760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:59.037837  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:59.078977  127760 cri.go:89] found id: ""
	I1212 23:17:59.079065  127760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:59.090869  127760 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:59.090893  127760 kubeadm.go:636] restartCluster start
	I1212 23:17:59.090957  127760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:59.101950  127760 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.103088  127760 kubeconfig.go:92] found "embed-certs-809120" server: "https://192.168.50.221:8443"
	I1212 23:17:59.105562  127760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:59.115942  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.116006  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.128428  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.128452  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.128508  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.141075  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.641778  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.641854  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.654519  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.142171  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.142275  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.157160  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.641601  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.641719  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.654666  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.141184  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.141289  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.154899  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.641381  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.641501  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.654663  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.141186  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.141311  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.154140  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.642051  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.642157  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.655013  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.586733  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.588383  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:03.588956  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.092631  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:03.591508  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:04.090728  128282 node_ready.go:49] node "default-k8s-diff-port-850839" has status "Ready":"True"
	I1212 23:18:04.090757  128282 node_ready.go:38] duration metric: took 7.616412902s waiting for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:18:04.090775  128282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:04.099347  128282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107155  128282 pod_ready.go:92] pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.107180  128282 pod_ready.go:81] duration metric: took 7.807715ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107192  128282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113524  128282 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.113547  128282 pod_ready.go:81] duration metric: took 6.348789ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113557  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:03.141560  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.141654  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.156399  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:03.642066  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.642159  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.657347  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.141755  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.141837  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.158471  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.641645  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.641754  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.655061  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.141603  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.141699  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.154832  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.641246  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.641321  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.658753  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.141224  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.141299  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.156055  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.641506  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.641593  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.654083  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.141490  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.141570  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.154699  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.641257  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.641336  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.653935  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.590423  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.088212  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:06.134727  128282 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:07.136828  128282 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.136854  128282 pod_ready.go:81] duration metric: took 3.023290043s waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.136866  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151525  128282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.151554  128282 pod_ready.go:81] duration metric: took 14.680003ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151570  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293823  128282 pod_ready.go:92] pod "kube-proxy-wjrjj" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.293853  128282 pod_ready.go:81] duration metric: took 142.276185ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293864  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690262  128282 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.690291  128282 pod_ready.go:81] duration metric: took 396.420266ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690311  128282 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:10.001790  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.141984  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.142065  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.154365  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:08.641957  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.642070  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.654449  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:09.117052  127760 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:18:09.117093  127760 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:18:09.117131  127760 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:18:09.117195  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:18:09.165861  127760 cri.go:89] found id: ""
	I1212 23:18:09.165944  127760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:18:09.183729  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:18:09.194407  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:18:09.194487  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204575  127760 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204609  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:09.333758  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.380332  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04653446s)
	I1212 23:18:10.380364  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.603185  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.692919  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.776099  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:18:10.776189  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.795881  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.310083  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.809948  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.309977  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.810420  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.089789  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.589345  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:14.002715  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:13.310509  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:13.336361  127760 api_server.go:72] duration metric: took 2.560264825s to wait for apiserver process to appear ...
	I1212 23:18:13.336391  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:18:13.336411  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.319120  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.319159  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.319177  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.400337  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.400373  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.900625  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.906178  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:17.906233  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.401353  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.407217  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:18.407262  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.901435  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.913756  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:18:18.922517  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:18:18.922545  127760 api_server.go:131] duration metric: took 5.586147801s to wait for apiserver health ...
	I1212 23:18:18.922556  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:18:18.922563  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:18:18.924845  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:18:15.088538  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:17.587744  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:16.503957  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.002214  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:18.926570  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:18:18.976384  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:18:19.009915  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:18:19.035935  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:18:19.035986  127760 system_pods.go:61] "coredns-5dd5756b68-bz6cz" [4f53d6a6-c877-4f76-8aca-06ee891d9652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:18:19.035996  127760 system_pods.go:61] "etcd-embed-certs-809120" [260387de-7507-4962-b2fd-90cd6b39cae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:18:19.036005  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [94ded414-9813-4d0e-8de4-8ad5f6c16a33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:18:19.036017  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [c6574dde-8281-4dd2-bacd-c0412f1f592c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:18:19.036028  127760 system_pods.go:61] "kube-proxy-h7zgl" [87ca2a99-1da7-4a50-b4c7-f160cddf9ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:18:19.036042  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [fc6d3a5c-4056-47f8-9156-f5d370ba1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:18:19.036053  127760 system_pods.go:61] "metrics-server-57f55c9bc5-mxsd2" [d519663c-7921-4fc9-8d0f-ecf6d3cdbd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:18:19.036071  127760 system_pods.go:61] "storage-provisioner" [900e5cb9-7d27-4446-b15d-21f67fa3b629] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:18:19.036081  127760 system_pods.go:74] duration metric: took 26.13268ms to wait for pod list to return data ...
	I1212 23:18:19.036093  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:18:19.045885  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:18:19.045930  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:18:19.045945  127760 node_conditions.go:105] duration metric: took 9.842707ms to run NodePressure ...
	I1212 23:18:19.045969  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:19.587096  127760 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593698  127760 kubeadm.go:787] kubelet initialised
	I1212 23:18:19.593722  127760 kubeadm.go:788] duration metric: took 6.595854ms waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593730  127760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:19.602567  127760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:21.623798  127760 pod_ready.go:102] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.590788  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:22.089448  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:24.090497  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:21.501964  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.502814  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:26.000629  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.124864  127760 pod_ready.go:92] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:23.124888  127760 pod_ready.go:81] duration metric: took 3.52228673s waiting for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:23.124898  127760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:25.143967  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.146069  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.645645  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.645671  127760 pod_ready.go:81] duration metric: took 4.520766787s waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.645686  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652369  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.652392  127760 pod_ready.go:81] duration metric: took 6.700076ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652402  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587478  128156 pod_ready.go:92] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.587505  128156 pod_ready.go:81] duration metric: took 40.035726456s waiting for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587518  128156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.596994  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.597015  128156 pod_ready.go:81] duration metric: took 9.490538ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.597027  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601904  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.601930  128156 pod_ready.go:81] duration metric: took 4.894855ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601942  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608643  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.608662  128156 pod_ready.go:81] duration metric: took 6.712079ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608673  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614595  128156 pod_ready.go:92] pod "kube-proxy-rqhmc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.614624  128156 pod_ready.go:81] duration metric: took 5.945157ms waiting for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614632  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985244  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.985272  128156 pod_ready.go:81] duration metric: took 370.631498ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985282  128156 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.293707  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.293859  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:28.500792  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:31.002513  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.676207  127760 pod_ready.go:102] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:32.172306  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.172339  127760 pod_ready.go:81] duration metric: took 4.519929269s waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.172355  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178133  127760 pod_ready.go:92] pod "kube-proxy-h7zgl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.178154  127760 pod_ready.go:81] duration metric: took 5.793304ms waiting for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178163  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184283  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.184305  127760 pod_ready.go:81] duration metric: took 6.134863ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184319  127760 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:31.792415  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.793837  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.499687  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:35.500853  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:34.448290  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.948646  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.296844  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.793406  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:40.501951  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.949791  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.448832  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.294594  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.295134  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.000673  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.000747  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.452098  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.947475  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.793152  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.793282  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.003229  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.499682  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.949034  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:50.449118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.455176  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.793896  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.293413  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.293611  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:51.502870  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.000866  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.002047  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.948058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.950946  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.791908  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.792808  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.500328  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.000549  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:59.449089  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.948622  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:00.793090  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.294337  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.002131  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.500315  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.948920  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.949566  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.792376  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.793999  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:08.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.500002  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.950271  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.450074  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.292457  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.294375  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.503977  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:15.000631  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.948486  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.951220  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.448916  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.792888  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:16.793429  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.293010  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.000916  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.499770  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.449088  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.949856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.293433  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.792996  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.506787  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.507411  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:26.001279  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.950269  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.952818  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.793527  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.294892  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.499823  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.500142  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.448303  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.449512  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.793364  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.293202  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.001883  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.500561  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:32.948419  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:34.948716  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:36.949202  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.293744  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:37.294070  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:38.001116  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:40.001502  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.449215  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:41.948577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.793176  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.292783  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.501401  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:45.003364  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:43.950039  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.449043  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:44.792361  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.793184  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.294980  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:47.500147  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.501096  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:48.449912  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:50.950549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:51.794547  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.298465  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.000382  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.005736  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.950635  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:55.449330  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:57.449700  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.792615  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.499865  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:58.499980  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:00.500389  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.950151  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:02.447970  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:01.793306  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.793698  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.001300  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.499370  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:04.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:06.450549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.793804  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.793899  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.500520  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.000481  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:08.950058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:11.449345  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.293157  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.293642  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.500064  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.500937  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:13.949163  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:16.448489  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.793066  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.293467  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.293785  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.003921  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.501044  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:18.953218  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.449082  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.792447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.794479  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.999979  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:24.001269  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.001308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.948517  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:25.949879  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.292488  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.293405  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.499717  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.500472  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.448633  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.455346  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.293436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.296063  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:33.004484  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:35.500190  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.949307  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.949549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.447994  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.792727  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.292297  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.293185  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.501094  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:40.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.448914  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.449574  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.296498  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.794079  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:42.000667  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:44.500084  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.949370  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.448365  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.293571  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.795374  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.501287  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:49.000247  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.002102  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.449326  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:50.950049  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.295712  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.796436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.500278  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.500483  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:52.950509  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.448194  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:57.448444  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:56.293432  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.791909  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.000148  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.000718  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:59.448627  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:01.449178  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.793652  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.798916  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.501103  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:04.504053  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:03.948376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.949118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.293868  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.796468  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.000140  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:09.500040  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.949917  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.449692  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.296954  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.793159  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:11.500724  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:13.501811  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:16.000506  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.948932  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:14.951174  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.448985  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:15.294394  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.792822  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:18.501242  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.000679  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:19.449857  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.949137  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:20.293991  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:22.793476  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.501237  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.001069  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.950208  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.449036  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:25.294562  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:27.792099  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.500763  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.000635  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.947918  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:30.949180  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:29.793559  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.793709  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:34.292407  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:33.001948  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.002761  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:32.949352  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.448233  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.449470  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:36.292723  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:38.792944  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.501308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.001944  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:39.948613  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:41.953252  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.793938  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.796054  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.499956  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.504598  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.453963  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.952856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:45.292988  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:47.792829  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.999714  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.000749  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.000798  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.448592  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.461405  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.793084  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:52.293550  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.001475  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:55.499894  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.952376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.451000  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:54.793373  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.796557  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:59.293830  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:57.501136  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.000501  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:58.949246  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.949331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:01.792604  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.793283  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:02.501611  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.001210  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.449006  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.449356  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:06.291970  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:08.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.502381  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.690392  128282 pod_ready.go:81] duration metric: took 4m0.000056495s waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:07.690437  128282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:07.690447  128282 pod_ready.go:38] duration metric: took 4m3.599656754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:07.690468  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:22:07.690503  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:07.690560  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:07.752216  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:07.752249  128282 cri.go:89] found id: ""
	I1212 23:22:07.752258  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:07.752309  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.757000  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:07.757068  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:07.801367  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:07.801398  128282 cri.go:89] found id: ""
	I1212 23:22:07.801409  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:07.801470  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.806744  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:07.806804  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:07.850495  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:07.850530  128282 cri.go:89] found id: ""
	I1212 23:22:07.850538  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:07.850588  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.855144  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:07.855226  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:07.900092  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:07.900121  128282 cri.go:89] found id: ""
	I1212 23:22:07.900131  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:07.900199  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.904280  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:07.904357  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:07.945991  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:07.946019  128282 cri.go:89] found id: ""
	I1212 23:22:07.946034  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:07.946101  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.951095  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:07.951168  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:07.992586  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:07.992611  128282 cri.go:89] found id: ""
	I1212 23:22:07.992619  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:07.992667  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.996887  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:07.996945  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:08.038769  128282 cri.go:89] found id: ""
	I1212 23:22:08.038810  128282 logs.go:284] 0 containers: []
	W1212 23:22:08.038820  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:08.038829  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:08.038892  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:08.081167  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.081202  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.081209  128282 cri.go:89] found id: ""
	I1212 23:22:08.081225  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:08.081282  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.085740  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.089816  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:08.089836  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:08.137243  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:08.137274  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:08.180654  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:08.180686  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:08.240646  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:08.240684  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:08.289713  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:08.289753  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:08.440863  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:08.440902  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:08.505477  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:08.505516  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.561373  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:08.561411  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:08.626446  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:08.626482  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:08.681726  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:08.681769  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:08.703440  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:08.703468  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.739960  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:08.739998  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:09.213821  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:09.213867  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:07.949577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:09.950086  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.449579  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:10.793412  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.794447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:11.771447  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:11.787326  128282 api_server.go:72] duration metric: took 4m15.571529815s to wait for apiserver process to appear ...
	I1212 23:22:11.787355  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:22:11.787395  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:11.787459  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:11.841146  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:11.841178  128282 cri.go:89] found id: ""
	I1212 23:22:11.841199  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:11.841263  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.845844  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:11.845917  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:11.895757  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:11.895780  128282 cri.go:89] found id: ""
	I1212 23:22:11.895789  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:11.895846  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.900575  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:11.900641  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:11.941848  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:11.941872  128282 cri.go:89] found id: ""
	I1212 23:22:11.941882  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:11.941962  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.948119  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:11.948192  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:11.997102  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:11.997126  128282 cri.go:89] found id: ""
	I1212 23:22:11.997135  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:11.997189  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.002683  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:12.002750  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:12.042120  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:12.042144  128282 cri.go:89] found id: ""
	I1212 23:22:12.042159  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:12.042225  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.047068  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:12.047144  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:12.092055  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:12.092078  128282 cri.go:89] found id: ""
	I1212 23:22:12.092087  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:12.092137  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.097642  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:12.097713  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:12.137481  128282 cri.go:89] found id: ""
	I1212 23:22:12.137521  128282 logs.go:284] 0 containers: []
	W1212 23:22:12.137532  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:12.137542  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:12.137607  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:12.183712  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:12.183735  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.183740  128282 cri.go:89] found id: ""
	I1212 23:22:12.183747  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:12.183813  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.188656  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.193613  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:12.193639  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:12.206911  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:12.206941  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:12.258294  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:12.258335  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.300901  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:12.300934  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:12.765702  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:12.765746  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:12.909101  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:12.909138  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:12.967049  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:12.967083  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:13.010895  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:13.010930  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:13.062291  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:13.062324  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:13.107276  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:13.107320  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:13.166395  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:13.166448  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:13.212812  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:13.212853  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:13.260977  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:13.261022  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:15.816287  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:22:15.821554  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:22:15.822925  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:22:15.822945  128282 api_server.go:131] duration metric: took 4.035583432s to wait for apiserver health ...
	I1212 23:22:15.822954  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:22:15.822976  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:15.823024  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:15.870940  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:15.870981  128282 cri.go:89] found id: ""
	I1212 23:22:15.870993  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:15.871062  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.876167  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:15.876244  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:15.916642  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:15.916671  128282 cri.go:89] found id: ""
	I1212 23:22:15.916682  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:15.916747  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.921173  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:15.921238  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:15.963421  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:15.963449  128282 cri.go:89] found id: ""
	I1212 23:22:15.963461  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:15.963521  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.967747  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:15.967821  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:14.949925  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.949999  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:15.294181  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:17.793324  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.011046  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.011071  128282 cri.go:89] found id: ""
	I1212 23:22:16.011079  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:16.011128  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.015592  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:16.015659  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:16.058065  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:16.058092  128282 cri.go:89] found id: ""
	I1212 23:22:16.058103  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:16.058157  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.062334  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:16.062398  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:16.105032  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:16.105062  128282 cri.go:89] found id: ""
	I1212 23:22:16.105074  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:16.105140  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.109674  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:16.109728  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:16.151188  128282 cri.go:89] found id: ""
	I1212 23:22:16.151221  128282 logs.go:284] 0 containers: []
	W1212 23:22:16.151230  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:16.151246  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:16.151314  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:16.196149  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:16.196191  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.196199  128282 cri.go:89] found id: ""
	I1212 23:22:16.196209  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:16.196272  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.201690  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.205939  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:16.205970  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:16.358186  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:16.358236  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:16.404737  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:16.404780  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.449040  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:16.449069  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.491141  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:16.491173  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:16.860522  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:16.860578  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:16.877982  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:16.878030  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:16.923301  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:16.923338  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:16.965351  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:16.965382  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:17.024559  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:17.024603  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:17.079193  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:17.079229  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:17.123956  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:17.124003  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:17.202000  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:17.202043  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:19.755866  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:22:19.755901  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.755907  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.755914  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.755922  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.755929  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.755936  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.755946  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.755954  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.755963  128282 system_pods.go:74] duration metric: took 3.933003633s to wait for pod list to return data ...
	I1212 23:22:19.755977  128282 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:22:19.758618  128282 default_sa.go:45] found service account: "default"
	I1212 23:22:19.758639  128282 default_sa.go:55] duration metric: took 2.655294ms for default service account to be created ...
	I1212 23:22:19.758647  128282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:22:19.764376  128282 system_pods.go:86] 8 kube-system pods found
	I1212 23:22:19.764398  128282 system_pods.go:89] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.764404  128282 system_pods.go:89] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.764409  128282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.764414  128282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.764418  128282 system_pods.go:89] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.764432  128282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.764444  128282 system_pods.go:89] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.764454  128282 system_pods.go:89] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.764464  128282 system_pods.go:126] duration metric: took 5.811076ms to wait for k8s-apps to be running ...
	I1212 23:22:19.764475  128282 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:22:19.764531  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:19.781048  128282 system_svc.go:56] duration metric: took 16.561836ms WaitForService to wait for kubelet.
	I1212 23:22:19.781100  128282 kubeadm.go:581] duration metric: took 4m23.565309829s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:22:19.781129  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:22:19.784205  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:22:19.784229  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:22:19.784240  128282 node_conditions.go:105] duration metric: took 3.105926ms to run NodePressure ...
	I1212 23:22:19.784253  128282 start.go:228] waiting for startup goroutines ...
	I1212 23:22:19.784259  128282 start.go:233] waiting for cluster config update ...
	I1212 23:22:19.784269  128282 start.go:242] writing updated cluster config ...
	I1212 23:22:19.784545  128282 ssh_runner.go:195] Run: rm -f paused
	I1212 23:22:19.840938  128282 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:22:19.842885  128282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850839" cluster and "default" namespace by default
	I1212 23:22:19.449331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:21.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:20.294156  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:22.792746  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:23.949834  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:26.452555  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.793601  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.985518  128156 pod_ready.go:81] duration metric: took 4m0.000203674s waiting for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:24.985551  128156 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:24.985571  128156 pod_ready.go:38] duration metric: took 4m40.456239368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:24.985600  128156 kubeadm.go:640] restartCluster took 5m2.616770336s
	W1212 23:22:24.985660  128156 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:24.985690  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:28.949293  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:31.449689  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:32.184476  127760 pod_ready.go:81] duration metric: took 4m0.000136331s waiting for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:32.184516  127760 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:32.184559  127760 pod_ready.go:38] duration metric: took 4m12.59080567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:32.184598  127760 kubeadm.go:640] restartCluster took 4m33.093698567s
	W1212 23:22:32.184674  127760 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:32.184715  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:39.117782  128156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.132057077s)
	I1212 23:22:39.117868  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:39.132912  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:39.143453  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:39.153628  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:39.153684  128156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:39.374201  128156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:46.310264  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.12551082s)
	I1212 23:22:46.310350  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:46.327577  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:46.339177  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:46.350355  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:46.350407  127760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:46.414859  127760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:22:46.414971  127760 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:46.599881  127760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:46.600039  127760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:46.600208  127760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:46.867542  127760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:46.869398  127760 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:46.869528  127760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:46.869659  127760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:46.869770  127760 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:46.869933  127760 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:46.870496  127760 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:46.871021  127760 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:46.871802  127760 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:46.873187  127760 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:46.874737  127760 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:46.876316  127760 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:46.877713  127760 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:46.877769  127760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:47.211156  127760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:47.370652  127760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:47.491927  127760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:47.746007  127760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:47.746996  127760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:47.749868  127760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:47.751553  127760 out.go:204]   - Booting up control plane ...
	I1212 23:22:47.751724  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:47.751814  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:47.752662  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:47.770296  127760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:47.770438  127760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:47.770546  127760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.362262  128156 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:22:51.362341  128156 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:51.362461  128156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:51.362593  128156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:51.362706  128156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:51.362781  128156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:51.364439  128156 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:51.364561  128156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:51.364660  128156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:51.364758  128156 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:51.364840  128156 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:51.364971  128156 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:51.365060  128156 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:51.365137  128156 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:51.365215  128156 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:51.365320  128156 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:51.365425  128156 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:51.365479  128156 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:51.365553  128156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:51.365626  128156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:51.365706  128156 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:22:51.365778  128156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:51.365859  128156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:51.365936  128156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:51.366046  128156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:51.366131  128156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:51.368190  128156 out.go:204]   - Booting up control plane ...
	I1212 23:22:51.368316  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:51.368421  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:51.368517  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:51.368649  128156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:51.368763  128156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:51.368813  128156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.369013  128156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.369107  128156 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503652 seconds
	I1212 23:22:51.369231  128156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:51.369390  128156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:51.369465  128156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:51.369709  128156 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-115023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:51.369780  128156 kubeadm.go:322] [bootstrap-token] Using token: agyzoj.wkr94b17dt19k7yx
	I1212 23:22:51.371110  128156 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:51.371306  128156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:51.371421  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:51.371643  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:51.371825  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:51.371975  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:51.372085  128156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:51.372226  128156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:51.372285  128156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:51.372344  128156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:51.372353  128156 kubeadm.go:322] 
	I1212 23:22:51.372425  128156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:51.372437  128156 kubeadm.go:322] 
	I1212 23:22:51.372529  128156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:51.372540  128156 kubeadm.go:322] 
	I1212 23:22:51.372571  128156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:51.372645  128156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:51.372711  128156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:51.372720  128156 kubeadm.go:322] 
	I1212 23:22:51.372793  128156 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:51.372804  128156 kubeadm.go:322] 
	I1212 23:22:51.372861  128156 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:51.372871  128156 kubeadm.go:322] 
	I1212 23:22:51.372933  128156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:51.373050  128156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:51.373137  128156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:51.373149  128156 kubeadm.go:322] 
	I1212 23:22:51.373248  128156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:51.373345  128156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:51.373356  128156 kubeadm.go:322] 
	I1212 23:22:51.373456  128156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373583  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:51.373613  128156 kubeadm.go:322] 	--control-plane 
	I1212 23:22:51.373623  128156 kubeadm.go:322] 
	I1212 23:22:51.373724  128156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:51.373739  128156 kubeadm.go:322] 
	I1212 23:22:51.373842  128156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373985  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:51.374006  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:22:51.374015  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:51.375563  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:47.945457  127760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.376861  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:51.414215  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:51.484549  128156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:51.484635  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.484696  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=no-preload-115023 minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.564599  128156 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:51.924093  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.026923  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.628483  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.128275  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.628006  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:54.127897  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.450625  127760 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504757 seconds
	I1212 23:22:56.450779  127760 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:56.468441  127760 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:57.003074  127760 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:57.003292  127760 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-809120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:57.518097  127760 kubeadm.go:322] [bootstrap-token] Using token: ichlu8.wzw1wbhrbc06xbtw
	I1212 23:22:57.519536  127760 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:57.519639  127760 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:57.528652  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:57.538325  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:57.542226  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:57.551395  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:57.556988  127760 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:57.573462  127760 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:57.833933  127760 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:57.949764  127760 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:57.949788  127760 kubeadm.go:322] 
	I1212 23:22:57.949888  127760 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:57.949913  127760 kubeadm.go:322] 
	I1212 23:22:57.950013  127760 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:57.950036  127760 kubeadm.go:322] 
	I1212 23:22:57.950079  127760 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:57.950155  127760 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:57.950228  127760 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:57.950240  127760 kubeadm.go:322] 
	I1212 23:22:57.950301  127760 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:57.950311  127760 kubeadm.go:322] 
	I1212 23:22:57.950375  127760 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:57.950385  127760 kubeadm.go:322] 
	I1212 23:22:57.950468  127760 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:57.950578  127760 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:57.950678  127760 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:57.950702  127760 kubeadm.go:322] 
	I1212 23:22:57.950818  127760 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:57.950916  127760 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:57.950926  127760 kubeadm.go:322] 
	I1212 23:22:57.951054  127760 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951199  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:57.951231  127760 kubeadm.go:322] 	--control-plane 
	I1212 23:22:57.951266  127760 kubeadm.go:322] 
	I1212 23:22:57.951386  127760 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:57.951396  127760 kubeadm.go:322] 
	I1212 23:22:57.951494  127760 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951619  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:57.952303  127760 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:57.952326  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:22:57.952337  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:57.954692  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:54.628965  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.127922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.627980  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.128047  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.628471  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.128456  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.628284  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.128528  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.628480  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.128296  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.955898  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:57.975567  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:58.044612  127760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:58.044741  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.044746  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=embed-certs-809120 minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.158788  127760 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:58.375305  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.487117  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.075465  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.575132  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.075781  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.575754  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.075376  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.575524  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.075163  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.574821  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.628475  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.128509  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.628837  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.128959  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.627976  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.128077  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.628493  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.128203  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.628549  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.127987  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.627922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.756882  128156 kubeadm.go:1088] duration metric: took 13.272316322s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:04.756928  128156 kubeadm.go:406] StartCluster complete in 5m42.440524658s
	I1212 23:23:04.756955  128156 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.757069  128156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:04.759734  128156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.760081  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:04.760220  128156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:04.760311  128156 addons.go:69] Setting storage-provisioner=true in profile "no-preload-115023"
	I1212 23:23:04.760325  128156 addons.go:69] Setting default-storageclass=true in profile "no-preload-115023"
	I1212 23:23:04.760358  128156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-115023"
	I1212 23:23:04.760385  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:23:04.760332  128156 addons.go:231] Setting addon storage-provisioner=true in "no-preload-115023"
	W1212 23:23:04.760426  128156 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:04.760497  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760337  128156 addons.go:69] Setting metrics-server=true in profile "no-preload-115023"
	I1212 23:23:04.760525  128156 addons.go:231] Setting addon metrics-server=true in "no-preload-115023"
	W1212 23:23:04.760538  128156 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:04.760577  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760759  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760787  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.760953  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760986  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760995  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.761010  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.777848  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1212 23:23:04.778063  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1212 23:23:04.778315  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778479  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778613  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I1212 23:23:04.778931  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778945  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778952  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.778957  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779020  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.779302  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779561  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.779726  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.779749  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779929  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.779961  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.780516  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.781173  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.781207  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.783399  128156 addons.go:231] Setting addon default-storageclass=true in "no-preload-115023"
	W1212 23:23:04.783422  128156 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:04.783452  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.783871  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.783906  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.797493  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:23:04.797741  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I1212 23:23:04.798102  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798132  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798613  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798630  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.798956  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798985  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.799262  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799438  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.799639  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.801934  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.802007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.803861  128156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:04.802341  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:23:04.806911  128156 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:04.805759  128156 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:04.806058  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.808825  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:04.808833  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:04.808848  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:04.808856  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.808863  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.809266  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.809281  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.809624  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.810352  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.810381  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.813139  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813629  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.813654  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813828  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813882  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814303  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.814333  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.814148  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814542  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814625  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.814797  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814855  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.814954  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.815127  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.823127  128156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-115023" context rescaled to 1 replicas
	I1212 23:23:04.823174  128156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:04.824991  128156 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:04.826596  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:04.827821  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1212 23:23:04.828256  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.828820  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.828845  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.829390  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.829741  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.834167  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.834521  128156 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:04.834539  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:04.834563  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.838055  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838555  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.838587  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838772  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.838964  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.839119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.839284  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.972964  128156 node_ready.go:35] waiting up to 6m0s for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.973014  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:04.998182  128156 node_ready.go:49] node "no-preload-115023" has status "Ready":"True"
	I1212 23:23:04.998214  128156 node_ready.go:38] duration metric: took 25.214785ms waiting for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.998226  128156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:05.012036  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:05.027954  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:05.027977  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:05.063451  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:05.076403  128156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:05.119924  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:05.119957  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:05.216413  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.216443  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:05.285434  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.817542  128156 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:06.316381  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252894593s)
	I1212 23:23:06.316378  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304291472s)
	I1212 23:23:06.316446  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316460  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316491  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316509  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316903  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.316959  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.316966  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.316986  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316916  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317010  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.317022  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316995  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317032  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317327  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.317387  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317408  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.318858  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.318881  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.366104  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.366135  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.366427  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.366481  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.366492  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618093  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332604197s)
	I1212 23:23:06.618161  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618183  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618643  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.618665  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618676  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618684  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618845  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620326  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620340  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.620363  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.620384  128156 addons.go:467] Verifying addon metrics-server=true in "no-preload-115023"
	I1212 23:23:06.622226  128156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:03.075069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.575772  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.074921  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.575481  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.075785  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.575855  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.075276  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.575017  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.075100  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.575342  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.623716  128156 addons.go:502] enable addons completed in 1.863496659s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:07.165490  128156 pod_ready.go:102] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:08.161341  128156 pod_ready.go:92] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.161380  128156 pod_ready.go:81] duration metric: took 3.084948492s waiting for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.161395  128156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169259  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.169294  128156 pod_ready.go:81] duration metric: took 7.890109ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169309  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176068  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.176097  128156 pod_ready.go:81] duration metric: took 6.779109ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176111  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183056  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.183085  128156 pod_ready.go:81] duration metric: took 6.964809ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183099  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066100  128156 pod_ready.go:92] pod "kube-proxy-qs95k" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.066123  128156 pod_ready.go:81] duration metric: took 883.017234ms waiting for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066132  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357841  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.357874  128156 pod_ready.go:81] duration metric: took 291.734639ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357884  128156 pod_ready.go:38] duration metric: took 4.359648281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:09.357904  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:09.357970  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:09.372791  128156 api_server.go:72] duration metric: took 4.549577037s to wait for apiserver process to appear ...
	I1212 23:23:09.372820  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:09.372841  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:23:09.378375  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:23:09.379855  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:23:09.379882  128156 api_server.go:131] duration metric: took 7.054126ms to wait for apiserver health ...
	I1212 23:23:09.379893  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:09.561188  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:09.561216  128156 system_pods.go:61] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.561221  128156 system_pods.go:61] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.561225  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.561229  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.561235  128156 system_pods.go:61] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.561239  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.561245  128156 system_pods.go:61] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.561249  128156 system_pods.go:61] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.561257  128156 system_pods.go:74] duration metric: took 181.358443ms to wait for pod list to return data ...
	I1212 23:23:09.561265  128156 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:09.756864  128156 default_sa.go:45] found service account: "default"
	I1212 23:23:09.756894  128156 default_sa.go:55] duration metric: took 195.622122ms for default service account to be created ...
	I1212 23:23:09.756905  128156 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:09.960670  128156 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:09.960700  128156 system_pods.go:89] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.960705  128156 system_pods.go:89] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.960710  128156 system_pods.go:89] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.960715  128156 system_pods.go:89] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.960719  128156 system_pods.go:89] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.960723  128156 system_pods.go:89] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.960729  128156 system_pods.go:89] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.960735  128156 system_pods.go:89] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.960744  128156 system_pods.go:126] duration metric: took 203.831934ms to wait for k8s-apps to be running ...
	I1212 23:23:09.960754  128156 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:09.960805  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:09.974511  128156 system_svc.go:56] duration metric: took 13.742619ms WaitForService to wait for kubelet.
	I1212 23:23:09.974543  128156 kubeadm.go:581] duration metric: took 5.15133848s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:09.974571  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:10.158679  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:10.158708  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:10.158717  128156 node_conditions.go:105] duration metric: took 184.140544ms to run NodePressure ...
	I1212 23:23:10.158730  128156 start.go:228] waiting for startup goroutines ...
	I1212 23:23:10.158736  128156 start.go:233] waiting for cluster config update ...
	I1212 23:23:10.158746  128156 start.go:242] writing updated cluster config ...
	I1212 23:23:10.158996  128156 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:10.222646  128156 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:23:10.224867  128156 out.go:177] * Done! kubectl is now configured to use "no-preload-115023" cluster and "default" namespace by default
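
The lines above show the readiness gate applied once the no-preload cluster is up: the process is located with `pgrep -xnf kube-apiserver.*minikube.*`, then https://192.168.72.32:8443/healthz is polled until it returns 200 with body "ok", and only afterwards are the kube-system pods and the default service account enumerated. Below is a minimal Go sketch of that healthz probe, kept self-contained for illustration; the 5-second timeout and the skipped TLS verification are assumptions of the sketch (the actual check runs with the cluster's credentials), so this is not minikube's own client code.

        // healthz_probe.go - illustrative sketch of the /healthz readiness check
        // described in the log above: expect HTTP 200 with body "ok".
        package main

        import (
                "crypto/tls"
                "fmt"
                "io"
                "net/http"
                "time"
        )

        func apiserverHealthy(url string) (bool, error) {
                // Assumption for the sketch only: skip certificate verification so the
                // example runs without the cluster CA; the real probe does not do this.
                client := &http.Client{
                        Timeout:   5 * time.Second,
                        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
                }
                resp, err := client.Get(url)
                if err != nil {
                        return false, err
                }
                defer resp.Body.Close()
                body, _ := io.ReadAll(resp.Body)
                return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
        }

        func main() {
                // URL taken from the log lines above.
                ok, err := apiserverHealthy("https://192.168.72.32:8443/healthz")
                fmt.Println(ok, err)
        }
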
	I1212 23:23:08.075026  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:08.574992  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.075693  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.575069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.075713  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.575464  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.075090  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.250257  127760 kubeadm.go:1088] duration metric: took 13.205579442s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:11.250290  127760 kubeadm.go:406] StartCluster complete in 5m12.212668558s
	I1212 23:23:11.250312  127760 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.250409  127760 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:11.253977  127760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.254241  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:11.254250  127760 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:11.254337  127760 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-809120"
	I1212 23:23:11.254351  127760 addons.go:69] Setting default-storageclass=true in profile "embed-certs-809120"
	I1212 23:23:11.254358  127760 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-809120"
	W1212 23:23:11.254366  127760 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:11.254369  127760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-809120"
	I1212 23:23:11.254422  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254431  127760 addons.go:69] Setting metrics-server=true in profile "embed-certs-809120"
	I1212 23:23:11.254457  127760 addons.go:231] Setting addon metrics-server=true in "embed-certs-809120"
	W1212 23:23:11.254466  127760 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:11.254466  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:23:11.254510  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254798  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254802  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254845  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.254902  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254933  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.255058  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.272689  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1212 23:23:11.272926  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1212 23:23:11.273095  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273297  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273444  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1212 23:23:11.273710  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273722  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.273784  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273935  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273947  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274917  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.274942  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.275403  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.275452  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.275615  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.275776  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.276164  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.276199  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.279953  127760 addons.go:231] Setting addon default-storageclass=true in "embed-certs-809120"
	W1212 23:23:11.279984  127760 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:11.280016  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.280439  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.280488  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.296262  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1212 23:23:11.296273  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1212 23:23:11.296731  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.296839  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.297284  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297296  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297304  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297315  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297662  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297722  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297820  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.297867  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1212 23:23:11.297876  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.298202  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.298805  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.298823  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.299106  127760 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-809120" context rescaled to 1 replicas
	I1212 23:23:11.299151  127760 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:11.300876  127760 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:11.299808  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.299838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.299990  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.302374  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:11.303907  127760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:11.305369  127760 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:11.302872  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.307972  127760 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.307992  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:11.308012  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306693  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:11.308064  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:11.308088  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306729  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.312550  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312826  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.312853  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313337  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.313477  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.313493  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313524  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.313558  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313610  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.313772  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313988  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.314165  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.314287  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.334457  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1212 23:23:11.335025  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.335687  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.335719  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.336130  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.336356  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.338062  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.338356  127760 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.338380  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:11.338407  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.341489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342079  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.342119  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342283  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.342499  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.342642  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.342823  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.562179  127760 node_ready.go:35] waiting up to 6m0s for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.562383  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:11.573888  127760 node_ready.go:49] node "embed-certs-809120" has status "Ready":"True"
	I1212 23:23:11.573909  127760 node_ready.go:38] duration metric: took 11.694074ms waiting for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.573919  127760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:11.591310  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:11.634553  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.672164  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.681199  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:11.681232  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:11.910291  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:11.910325  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:11.993110  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:11.993135  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:12.043047  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:13.550517  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.988091372s)
	I1212 23:23:13.550558  127760 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:13.642966  127760 pod_ready.go:102] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:14.387226  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752630931s)
	I1212 23:23:14.387298  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387315  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387321  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715126034s)
	I1212 23:23:14.387345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387359  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387641  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387663  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387675  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387690  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387776  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387801  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387811  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387819  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.388233  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388247  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388248  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.388285  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388291  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388345  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.426683  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.426713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.427017  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.427030  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.427038  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.477873  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.434777303s)
	I1212 23:23:14.477930  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.477944  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478303  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478321  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.478333  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.478357  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478607  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478622  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478632  127760 addons.go:467] Verifying addon metrics-server=true in "embed-certs-809120"
	I1212 23:23:14.480500  127760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:14.481900  127760 addons.go:502] enable addons completed in 3.227656537s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:15.629572  127760 pod_ready.go:92] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.629599  127760 pod_ready.go:81] duration metric: took 4.038262674s waiting for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.629608  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.638502  127760 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638532  127760 pod_ready.go:81] duration metric: took 8.918039ms waiting for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	E1212 23:23:15.638547  127760 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638556  127760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647047  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.647075  127760 pod_ready.go:81] duration metric: took 8.510672ms waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647089  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655068  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.655091  127760 pod_ready.go:81] duration metric: took 7.994932ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655100  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664338  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.664386  127760 pod_ready.go:81] duration metric: took 9.26869ms waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664401  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732454  127760 pod_ready.go:92] pod "kube-proxy-4nb6w" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:16.732480  127760 pod_ready.go:81] duration metric: took 1.068071012s waiting for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732489  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022376  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:17.022402  127760 pod_ready.go:81] duration metric: took 289.906446ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022423  127760 pod_ready.go:38] duration metric: took 5.448491831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:17.022445  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:17.022494  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:17.039594  127760 api_server.go:72] duration metric: took 5.740406855s to wait for apiserver process to appear ...
	I1212 23:23:17.039620  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:17.039637  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:23:17.044745  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:23:17.046494  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:23:17.046521  127760 api_server.go:131] duration metric: took 6.894306ms to wait for apiserver health ...
	I1212 23:23:17.046531  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:17.227869  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:17.227899  127760 system_pods.go:61] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.227904  127760 system_pods.go:61] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.227909  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.227913  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.227916  127760 system_pods.go:61] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.227920  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.227927  127760 system_pods.go:61] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.227933  127760 system_pods.go:61] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.227944  127760 system_pods.go:74] duration metric: took 181.405975ms to wait for pod list to return data ...
	I1212 23:23:17.227962  127760 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:17.423151  127760 default_sa.go:45] found service account: "default"
	I1212 23:23:17.423181  127760 default_sa.go:55] duration metric: took 195.20215ms for default service account to be created ...
	I1212 23:23:17.423190  127760 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:17.627077  127760 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:17.627104  127760 system_pods.go:89] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.627109  127760 system_pods.go:89] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.627114  127760 system_pods.go:89] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.627118  127760 system_pods.go:89] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.627124  127760 system_pods.go:89] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.627128  127760 system_pods.go:89] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.627135  127760 system_pods.go:89] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.627139  127760 system_pods.go:89] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.627147  127760 system_pods.go:126] duration metric: took 203.952951ms to wait for k8s-apps to be running ...
	I1212 23:23:17.627155  127760 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:17.627197  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:17.641949  127760 system_svc.go:56] duration metric: took 14.784378ms WaitForService to wait for kubelet.
	I1212 23:23:17.641979  127760 kubeadm.go:581] duration metric: took 6.342797652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:17.642005  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:17.823169  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:17.823201  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:17.823214  127760 node_conditions.go:105] duration metric: took 181.202017ms to run NodePressure ...
	I1212 23:23:17.823230  127760 start.go:228] waiting for startup goroutines ...
	I1212 23:23:17.823258  127760 start.go:233] waiting for cluster config update ...
	I1212 23:23:17.823276  127760 start.go:242] writing updated cluster config ...
	I1212 23:23:17.823609  127760 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:17.879192  127760 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:23:17.880946  127760 out.go:177] * Done! kubectl is now configured to use "embed-certs-809120" cluster and "default" namespace by default
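
The embed-certs log above records the addon flow end to end: each manifest is copied into /etc/kubernetes/addons/ on the guest and applied with the bundled kubectl against /var/lib/minikube/kubeconfig, and the CoreDNS ConfigMap is rewritten in place to add a hosts block mapping host.minikube.internal to 192.168.50.1. The Go sketch below mirrors only the apply step; the manifest paths and kubeconfig location are copied from the log lines, and it assumes a kubectl binary on PATH inside the guest, so it illustrates the flow rather than reproducing minikube's ssh_runner implementation.

        // apply_addons.go - illustrative sketch of the "kubectl apply -f" calls the
        // log above records for the storage-provisioner and metrics-server addons.
        package main

        import (
                "fmt"
                "os/exec"
        )

        func main() {
                // Paths and kubeconfig location are taken from the log; running this
                // anywhere other than the minikube guest is purely for illustration.
                manifests := []string{
                        "/etc/kubernetes/addons/storage-provisioner.yaml",
                        "/etc/kubernetes/addons/storageclass.yaml",
                        "/etc/kubernetes/addons/metrics-apiservice.yaml",
                        "/etc/kubernetes/addons/metrics-server-deployment.yaml",
                        "/etc/kubernetes/addons/metrics-server-rbac.yaml",
                        "/etc/kubernetes/addons/metrics-server-service.yaml",
                }
                for _, m := range manifests {
                        cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "apply", "-f", m)
                        out, err := cmd.CombinedOutput()
                        fmt.Printf("%s: %s err=%v\n", m, out, err)
                }
        }
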
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:17:40 UTC, ends at Tue 2023-12-12 23:32:19 UTC. --
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.529896823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423939529879384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d4969ec4-e363-4916-9f3b-f62cf9a5ab71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.530413083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a4e721c-2018-4734-b433-519d461f50a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.530560392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a4e721c-2018-4734-b433-519d461f50a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.530720917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a4e721c-2018-4734-b433-519d461f50a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.584207933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51da2762-e0fb-4c22-9072-c749eb85cc32 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.584293596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51da2762-e0fb-4c22-9072-c749eb85cc32 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.585497456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8f812800-0c2d-451b-a4f5-193f9df23ce7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.585892115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423939585880305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8f812800-0c2d-451b-a4f5-193f9df23ce7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.586829058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd4e2859-e63f-4e93-bddc-123e02a5292b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.586879115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd4e2859-e63f-4e93-bddc-123e02a5292b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.587039865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd4e2859-e63f-4e93-bddc-123e02a5292b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.631613116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=716aa949-7fc6-475a-88ee-2524b7ab89d4 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.631672730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=716aa949-7fc6-475a-88ee-2524b7ab89d4 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.633872515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f462d2d1-612c-4457-ac1d-989a9ebe30b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.634327458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423939634312598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f462d2d1-612c-4457-ac1d-989a9ebe30b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.635888729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a88ddac4-b414-41a7-a6c2-c173abd372eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.635991318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a88ddac4-b414-41a7-a6c2-c173abd372eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.636226770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a88ddac4-b414-41a7-a6c2-c173abd372eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.681339562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6d057721-acc2-4539-9a71-8a6cfd720771 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.681499821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6d057721-acc2-4539-9a71-8a6cfd720771 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.682665790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7e3aba3c-e35d-4fdc-bc59-9c5019998321 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.683110450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702423939683098031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7e3aba3c-e35d-4fdc-bc59-9c5019998321 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.683800284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=11c5715b-b278-4c5e-a829-3f4a993c896d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.683875019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=11c5715b-b278-4c5e-a829-3f4a993c896d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:32:19 embed-certs-809120 crio[710]: time="2023-12-12 23:32:19.684067718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=11c5715b-b278-4c5e-a829-3f4a993c896d name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c34f627c7cd17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   129efa3c9d2cf       storage-provisioner
	66342a7ece6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   66886a4d064b7       kube-proxy-4nb6w
	6a3df78435249       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   c2c36a7b6bfa8       coredns-5dd5756b68-qz4fn
	486b5230383fb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   582bb3c4a02a4       etcd-embed-certs-809120
	e7edb497978d8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   cc828429d9674       kube-scheduler-embed-certs-809120
	446438e29bfad       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   7d75363a0bc42       kube-apiserver-embed-certs-809120
	4fb33f05a9153       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   352e4d00108af       kube-controller-manager-embed-certs-809120
	
	* 
	* ==> coredns [6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-809120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-809120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=embed-certs-809120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-809120
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:28:26 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:28:26 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:28:26 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:28:26 +0000   Tue, 12 Dec 2023 23:23:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    embed-certs-809120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c22750f7cb4d4371bfa7e3d7f47269f3
	  System UUID:                c22750f7-cb4d-4371-bfa7-e3d7f47269f3
	  Boot ID:                    57045704-b81b-4b73-a22d-c562c550e68a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qz4fn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-809120                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m24s
	  kube-system                 kube-apiserver-embed-certs-809120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-809120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-4nb6w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-809120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-m6nc6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node embed-certs-809120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-809120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node embed-certs-809120 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node embed-certs-809120 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-809120 event: Registered Node embed-certs-809120 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.775025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.820104] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147938] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.515272] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.661321] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.114182] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.162862] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.117114] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.253386] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Dec12 23:18] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[ +19.354754] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 23:22] systemd-fstab-generator[3526]: Ignoring "noauto" for root device
	[  +9.806829] systemd-fstab-generator[3852]: Ignoring "noauto" for root device
	[Dec12 23:23] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb] <==
	* {"level":"info","ts":"2023-12-12T23:22:52.209631Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.221:2380"}
	{"level":"info","ts":"2023-12-12T23:22:52.210156Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"7e2ae951029168ce","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-12-12T23:22:52.210235Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:22:52.210701Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:22:52.210796Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T23:22:52.210594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce switched to configuration voters=(9091335331945474254)"}
	{"level":"info","ts":"2023-12-12T23:22:52.211067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"35ecb74b0d77a53b","local-member-id":"7e2ae951029168ce","added-peer-id":"7e2ae951029168ce","added-peer-peer-urls":["https://192.168.50.221:2380"]}
	{"level":"info","ts":"2023-12-12T23:22:52.48353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgPreVoteResp from 7e2ae951029168ce at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgVoteResp from 7e2ae951029168ce at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e2ae951029168ce elected leader 7e2ae951029168ce at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.485263Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7e2ae951029168ce","local-member-attributes":"{Name:embed-certs-809120 ClientURLs:[https://192.168.50.221:2379]}","request-path":"/0/members/7e2ae951029168ce/attributes","cluster-id":"35ecb74b0d77a53b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:22:52.4855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:52.486605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.221:2379"}
	{"level":"info","ts":"2023-12-12T23:22:52.486711Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.486854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:52.487928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"35ecb74b0d77a53b","local-member-id":"7e2ae951029168ce","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.488055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.488095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.491386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:22:52.491571Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:52.491662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:32:20 up 14 min,  0 users,  load average: 0.11, 0.22, 0.20
	Linux embed-certs-809120 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c] <==
	* W1212 23:27:55.227058       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:55.227262       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:27:55.227413       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:27:55.227187       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:27:55.227613       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:27:55.228789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:28:54.114667       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:28:55.227795       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:55.227890       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:28:55.227917       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:28:55.229173       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:28:55.229249       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:28:55.229274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:29:54.115422       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 23:30:54.114793       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:30:55.228097       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:55.228159       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:30:55.228168       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:30:55.229603       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:30:55.229746       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:30:55.229762       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:31:54.115993       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e] <==
	* I1212 23:26:40.684970       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:27:10.253371       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:10.694414       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:27:40.260700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:27:40.704965       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:10.275181       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:10.714308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:28:40.281047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:28:40.724328       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:29:04.956784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="492.241µs"
	E1212 23:29:10.286603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:10.733381       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:29:17.954238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="192.693µs"
	E1212 23:29:40.293577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:29:40.743293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:30:10.303394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:10.753932       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:30:40.309656       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:30:40.762375       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:31:10.316038       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:31:10.772598       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:31:40.322309       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:31:40.783080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:32:10.331066       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:32:10.794101       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30] <==
	* I1212 23:23:15.719395       1 server_others.go:69] "Using iptables proxy"
	I1212 23:23:15.752079       1 node.go:141] Successfully retrieved node IP: 192.168.50.221
	I1212 23:23:15.878158       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:23:15.878236       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:23:15.881610       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:23:15.881671       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:23:15.881899       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:23:15.881934       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:23:15.884123       1 config.go:188] "Starting service config controller"
	I1212 23:23:15.884177       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:23:15.884207       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:23:15.884211       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:23:15.886688       1 config.go:315] "Starting node config controller"
	I1212 23:23:15.886727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:23:15.984492       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:23:15.984551       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:23:15.986799       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67] <==
	* W1212 23:22:54.316674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:54.316682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:54.316707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:54.316745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:55.193996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:55.194121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:55.213127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:22:55.213248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:22:55.240412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:22:55.240685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:22:55.288805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:55.288940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:55.302393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:55.302518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:22:55.378856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:22:55.378941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:22:55.478283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:22:55.478527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:22:55.479286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:55.479352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:55.522516       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:22:55.522687       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:22:55.549805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:55.549855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:22:57.987942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:17:40 UTC, ends at Tue 2023-12-12 23:32:20 UTC. --
	Dec 12 23:29:46 embed-certs-809120 kubelet[3859]: E1212 23:29:46.936378    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:29:57 embed-certs-809120 kubelet[3859]: E1212 23:29:57.936694    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:29:57 embed-certs-809120 kubelet[3859]: E1212 23:29:57.962742    3859 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:29:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:29:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:29:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:30:10 embed-certs-809120 kubelet[3859]: E1212 23:30:10.935351    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:30:23 embed-certs-809120 kubelet[3859]: E1212 23:30:23.935628    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:30:36 embed-certs-809120 kubelet[3859]: E1212 23:30:36.935939    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:30:51 embed-certs-809120 kubelet[3859]: E1212 23:30:51.937041    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:30:57 embed-certs-809120 kubelet[3859]: E1212 23:30:57.962739    3859 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:30:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:30:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:30:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:31:02 embed-certs-809120 kubelet[3859]: E1212 23:31:02.935826    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:31:14 embed-certs-809120 kubelet[3859]: E1212 23:31:14.935168    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:31:26 embed-certs-809120 kubelet[3859]: E1212 23:31:26.936083    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:31:39 embed-certs-809120 kubelet[3859]: E1212 23:31:39.936286    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:31:54 embed-certs-809120 kubelet[3859]: E1212 23:31:54.935960    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:31:57 embed-certs-809120 kubelet[3859]: E1212 23:31:57.962645    3859 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:31:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:31:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:31:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:32:07 embed-certs-809120 kubelet[3859]: E1212 23:32:07.936400    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:32:18 embed-certs-809120 kubelet[3859]: E1212 23:32:18.936775    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	
	* 
	* ==> storage-provisioner [c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed] <==
	* I1212 23:23:15.814633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:23:15.825151       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:23:15.825228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:23:15.836554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:23:15.836754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b!
	I1212 23:23:15.837947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c92bc176-fa01-4b7d-ab51-dd432abe9c92", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b became leader
	I1212 23:23:15.938030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-809120 -n embed-certs-809120
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-809120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-m6nc6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6: exit status 1 (78.116509ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-m6nc6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.16s)
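The NotFound from the describe step likely means the non-running metrics-server pod listed a moment earlier was deleted or replaced before the post-mortem query ran; the underlying failure is still the ImagePullBackOff shown in the kubelet log above. A hedged sketch of the equivalent manual post-mortem, assuming the profile is still up (the events query is an extra illustrative step, not something helpers_test.go performs):

	$ kubectl --context embed-certs-809120 get po -A --field-selector=status.phase!=Running
	  # same non-running-pod check as helpers_test.go:261
	$ kubectl --context embed-certs-809120 -n kube-system get events --field-selector involvedObject.name=metrics-server-57f55c9bc5-m6nc6
	  # surfaces the Back-off pulling image events for that pod (substitute the current pod name if it was recreated)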

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:27:09.616830   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:27:14.817931   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:27:33.175212   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:28:02.620901   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:28:13.361563   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:28:16.111961   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:28:32.662723   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:29:01.564835   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:29:17.803646   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:29:36.406126   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:30:02.202997   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:30:25.172277   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:30:51.771380   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:31:10.129186   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:36:00.794362039 +0000 UTC m=+5600.605419480
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-549640 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-549640 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (64.096212ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-549640 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
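The image check comes back empty because the dashboard addon never created its namespace in this profile, so there is no deployment to inspect. A minimal manual verification, assuming old-k8s-version-549640 is still running; the commands below mirror the addon enable/describe calls recorded elsewhere in this report:

	$ minikube -p old-k8s-version-549640 addons list
	  # check whether the dashboard addon is reported as enabled for this profile
	$ kubectl --context old-k8s-version-549640 get ns kubernetes-dashboard
	  # NotFound here matches the failure above
	$ kubectl --context old-k8s-version-549640 get deploy -n kubernetes-dashboard -o wide
	  # on a healthy run the IMAGES column would contain registry.k8s.io/echoserver:1.4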
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-549640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-549640 logs -n 25: (1.710030781s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-828988 sudo cat                              | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo                                  | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo find                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-828988 sudo crio                             | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-828988                                       | bridge-828988                | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-685244 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:08 UTC |
	|         | disable-driver-mounts-685244                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:12:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:12:31.006246  128282 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:12:31.006380  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006389  128282 out.go:309] Setting ErrFile to fd 2...
	I1212 23:12:31.006393  128282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:12:31.006549  128282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:12:31.007106  128282 out.go:303] Setting JSON to false
	I1212 23:12:31.008035  128282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14105,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:12:31.008097  128282 start.go:138] virtualization: kvm guest
	I1212 23:12:31.010317  128282 out.go:177] * [default-k8s-diff-port-850839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:12:31.011782  128282 notify.go:220] Checking for updates...
	I1212 23:12:31.011787  128282 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:12:31.013177  128282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:12:31.014626  128282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:12:31.016153  128282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:12:31.017420  128282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:12:31.018789  128282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:12:31.020548  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:12:31.021022  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.021073  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.036337  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I1212 23:12:31.036724  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.037285  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.037315  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.037677  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.037910  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.038190  128282 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:12:31.038482  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:12:31.038521  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:12:31.052455  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46557
	I1212 23:12:31.052897  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:12:31.053408  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:12:31.053428  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:12:31.053842  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:12:31.054041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:12:31.090916  128282 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 23:12:31.092159  128282 start.go:298] selected driver: kvm2
	I1212 23:12:31.092174  128282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.092313  128282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:12:31.092991  128282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.093081  128282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:12:31.108612  128282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:12:31.108979  128282 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 23:12:31.109050  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:12:31.109064  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:12:31.109078  128282 start_flags.go:323] config:
	{Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:12:31.109261  128282 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:12:31.110991  128282 out.go:177] * Starting control plane node default-k8s-diff-port-850839 in cluster default-k8s-diff-port-850839
	I1212 23:12:28.611488  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:31.112184  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:12:31.112223  128282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 23:12:31.112231  128282 cache.go:56] Caching tarball of preloaded images
	I1212 23:12:31.112315  128282 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:12:31.112331  128282 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 23:12:31.112435  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:12:31.112621  128282 start.go:365] acquiring machines lock for default-k8s-diff-port-850839: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:12:34.691505  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:37.763538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:43.843515  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:46.915553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:52.995487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:12:56.067468  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:02.147575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:05.219586  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:11.299553  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:14.371547  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:20.451538  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:23.523565  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:29.603544  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:32.675516  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:38.755580  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:41.827595  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:47.907601  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:50.979707  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:13:57.059532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:00.131511  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:06.211489  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:09.283534  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:15.363535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:18.435583  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:24.515478  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:27.587546  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:33.667567  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:36.739532  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:42.819531  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:45.891616  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:51.971509  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:14:55.043560  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:01.123510  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:04.195575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:10.275535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:13.347520  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:19.427542  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:22.499524  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:28.579575  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:31.651552  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:37.731535  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:40.803533  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:46.883561  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:49.955571  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:56.035557  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:15:59.107536  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:05.187487  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:08.259527  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:14.339497  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:17.411598  127760 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.221:22: connect: no route to host
	I1212 23:16:20.416121  127900 start.go:369] acquired machines lock for "old-k8s-version-549640" in 4m27.702597236s
	I1212 23:16:20.416185  127900 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:20.416197  127900 fix.go:54] fixHost starting: 
	I1212 23:16:20.416598  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:20.416638  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:20.431626  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I1212 23:16:20.432088  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:20.432550  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:16:20.432573  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:20.432976  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:20.433174  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:20.433352  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:16:20.435450  127900 fix.go:102] recreateIfNeeded on old-k8s-version-549640: state=Stopped err=<nil>
	I1212 23:16:20.435477  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	W1212 23:16:20.435650  127900 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:20.437467  127900 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-549640" ...
	I1212 23:16:20.438890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Start
	I1212 23:16:20.439060  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring networks are active...
	I1212 23:16:20.439992  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network default is active
	I1212 23:16:20.440387  127900 main.go:141] libmachine: (old-k8s-version-549640) Ensuring network mk-old-k8s-version-549640 is active
	I1212 23:16:20.440738  127900 main.go:141] libmachine: (old-k8s-version-549640) Getting domain xml...
	I1212 23:16:20.441435  127900 main.go:141] libmachine: (old-k8s-version-549640) Creating domain...
	I1212 23:16:21.692826  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting to get IP...
	I1212 23:16:21.693784  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.694269  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.694313  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.694229  128878 retry.go:31] will retry after 250.302126ms: waiting for machine to come up
	I1212 23:16:21.945651  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:21.946122  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:21.946145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:21.946067  128878 retry.go:31] will retry after 271.460868ms: waiting for machine to come up
	I1212 23:16:22.219848  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.220326  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.220352  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.220248  128878 retry.go:31] will retry after 466.723624ms: waiting for machine to come up
	I1212 23:16:20.413611  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:20.413648  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:16:20.415967  127760 machine.go:91] provisioned docker machine in 4m37.407647774s
	I1212 23:16:20.416013  127760 fix.go:56] fixHost completed within 4m37.429684827s
	I1212 23:16:20.416025  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 4m37.429713708s
	W1212 23:16:20.416055  127760 start.go:694] error starting host: provision: host is not running
	W1212 23:16:20.416230  127760 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1212 23:16:20.416241  127760 start.go:709] Will try again in 5 seconds ...
	I1212 23:16:22.689020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:22.689524  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:22.689559  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:22.689474  128878 retry.go:31] will retry after 384.986526ms: waiting for machine to come up
	I1212 23:16:23.076020  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.076428  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.076462  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.076365  128878 retry.go:31] will retry after 673.784203ms: waiting for machine to come up
	I1212 23:16:23.752374  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:23.752825  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:23.752859  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:23.752777  128878 retry.go:31] will retry after 744.371791ms: waiting for machine to come up
	I1212 23:16:24.498624  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:24.499057  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:24.499088  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:24.498994  128878 retry.go:31] will retry after 1.095766265s: waiting for machine to come up
	I1212 23:16:25.596742  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:25.597192  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:25.597217  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:25.597133  128878 retry.go:31] will retry after 1.340596782s: waiting for machine to come up
	I1212 23:16:26.939593  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:26.939933  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:26.939957  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:26.939881  128878 retry.go:31] will retry after 1.546075974s: waiting for machine to come up
	I1212 23:16:25.417922  127760 start.go:365] acquiring machines lock for embed-certs-809120: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:16:28.488543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:28.488923  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:28.488949  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:28.488883  128878 retry.go:31] will retry after 2.06517547s: waiting for machine to come up
	I1212 23:16:30.555809  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:30.556300  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:30.556330  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:30.556262  128878 retry.go:31] will retry after 2.237409729s: waiting for machine to come up
	I1212 23:16:32.796273  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:32.796684  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:32.796712  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:32.796629  128878 retry.go:31] will retry after 3.535954383s: waiting for machine to come up
	I1212 23:16:36.333758  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:36.334211  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | unable to find current IP address of domain old-k8s-version-549640 in network mk-old-k8s-version-549640
	I1212 23:16:36.334243  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | I1212 23:16:36.334143  128878 retry.go:31] will retry after 3.820382113s: waiting for machine to come up
	I1212 23:16:41.367963  128156 start.go:369] acquired machines lock for "no-preload-115023" in 4m21.778030837s
	I1212 23:16:41.368034  128156 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:16:41.368046  128156 fix.go:54] fixHost starting: 
	I1212 23:16:41.368459  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:16:41.368498  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:16:41.384557  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1212 23:16:41.385004  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:16:41.385448  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:16:41.385471  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:16:41.385799  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:16:41.386007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:16:41.386192  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:16:41.387807  128156 fix.go:102] recreateIfNeeded on no-preload-115023: state=Stopped err=<nil>
	I1212 23:16:41.387858  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	W1212 23:16:41.388030  128156 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:16:41.390189  128156 out.go:177] * Restarting existing kvm2 VM for "no-preload-115023" ...
	I1212 23:16:40.159111  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159503  127900 main.go:141] libmachine: (old-k8s-version-549640) Found IP for machine: 192.168.61.146
	I1212 23:16:40.159530  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserving static IP address...
	I1212 23:16:40.159543  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has current primary IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.159970  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.160042  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | skip adding static IP to network mk-old-k8s-version-549640 - found existing host DHCP lease matching {name: "old-k8s-version-549640", mac: "52:54:00:e7:8c:5e", ip: "192.168.61.146"}
	I1212 23:16:40.160060  127900 main.go:141] libmachine: (old-k8s-version-549640) Reserved static IP address: 192.168.61.146
	I1212 23:16:40.160072  127900 main.go:141] libmachine: (old-k8s-version-549640) Waiting for SSH to be available...
	I1212 23:16:40.160087  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Getting to WaitForSSH function...
	I1212 23:16:40.162048  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162377  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.162417  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.162498  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH client type: external
	I1212 23:16:40.162571  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa (-rw-------)
	I1212 23:16:40.162609  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:16:40.162629  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | About to run SSH command:
	I1212 23:16:40.162644  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | exit 0
	I1212 23:16:40.254804  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | SSH cmd err, output: <nil>: 
	I1212 23:16:40.255235  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetConfigRaw
	I1212 23:16:40.255885  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.258196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258526  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.258551  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.258806  127900 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/config.json ...
	I1212 23:16:40.259036  127900 machine.go:88] provisioning docker machine ...
	I1212 23:16:40.259059  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:40.259292  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259454  127900 buildroot.go:166] provisioning hostname "old-k8s-version-549640"
	I1212 23:16:40.259475  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.259624  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.261311  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261561  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.261583  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.261686  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.261818  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.261974  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.262114  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.262270  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.262645  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.262666  127900 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-549640 && echo "old-k8s-version-549640" | sudo tee /etc/hostname
	I1212 23:16:40.395342  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-549640
	
	I1212 23:16:40.395376  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.398008  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398391  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.398430  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.398533  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.398716  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.398890  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.399024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.399152  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.399489  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.399510  127900 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-549640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-549640/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-549640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:16:40.526781  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:16:40.526824  127900 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:16:40.526847  127900 buildroot.go:174] setting up certificates
	I1212 23:16:40.526859  127900 provision.go:83] configureAuth start
	I1212 23:16:40.526877  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetMachineName
	I1212 23:16:40.527276  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:40.530483  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.530876  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.530908  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.531162  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.533161  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533456  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.533488  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.533567  127900 provision.go:138] copyHostCerts
	I1212 23:16:40.533625  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:16:40.533645  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:16:40.533711  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:16:40.533799  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:16:40.533806  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:16:40.533829  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:16:40.533882  127900 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:16:40.533889  127900 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:16:40.533913  127900 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:16:40.533957  127900 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-549640 san=[192.168.61.146 192.168.61.146 localhost 127.0.0.1 minikube old-k8s-version-549640]
	I1212 23:16:40.630542  127900 provision.go:172] copyRemoteCerts
	I1212 23:16:40.630611  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:16:40.630639  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.633145  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633408  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.633433  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.633579  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.633790  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.633944  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.634162  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:40.725498  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 23:16:40.748097  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:16:40.769852  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:16:40.791381  127900 provision.go:86] duration metric: configureAuth took 264.501961ms
	I1212 23:16:40.791417  127900 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:16:40.791602  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:16:40.791678  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:40.794113  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794479  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:40.794514  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:40.794653  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:40.794864  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795055  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:40.795234  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:40.795443  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:40.795777  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:40.795807  127900 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:16:41.103469  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:16:41.103503  127900 machine.go:91] provisioned docker machine in 844.450063ms
	I1212 23:16:41.103517  127900 start.go:300] post-start starting for "old-k8s-version-549640" (driver="kvm2")
	I1212 23:16:41.103527  127900 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:16:41.103547  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.103894  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:16:41.103923  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.106459  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.106835  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.106864  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.107013  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.107190  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.107363  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.107532  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.201177  127900 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:16:41.205686  127900 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:16:41.205711  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:16:41.205773  127900 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:16:41.205862  127900 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:16:41.205970  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:16:41.214591  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:41.240854  127900 start.go:303] post-start completed in 137.32025ms
	I1212 23:16:41.240885  127900 fix.go:56] fixHost completed within 20.824687398s
	I1212 23:16:41.240915  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.243633  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244071  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.244104  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.244300  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.244517  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244651  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.244806  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.244981  127900 main.go:141] libmachine: Using SSH client type: native
	I1212 23:16:41.245337  127900 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I1212 23:16:41.245350  127900 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:16:41.367815  127900 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423001.317394085
	
	I1212 23:16:41.367837  127900 fix.go:206] guest clock: 1702423001.317394085
	I1212 23:16:41.367844  127900 fix.go:219] Guest: 2023-12-12 23:16:41.317394085 +0000 UTC Remote: 2023-12-12 23:16:41.240889292 +0000 UTC m=+288.685284781 (delta=76.504793ms)
	I1212 23:16:41.367863  127900 fix.go:190] guest clock delta is within tolerance: 76.504793ms
	I1212 23:16:41.367868  127900 start.go:83] releasing machines lock for "old-k8s-version-549640", held for 20.951706122s
	I1212 23:16:41.367895  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.368219  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:41.370769  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371172  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.371196  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.371378  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.371904  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372069  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:16:41.372157  127900 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:16:41.372206  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.372409  127900 ssh_runner.go:195] Run: cat /version.json
	I1212 23:16:41.372438  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:16:41.374847  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.374869  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375341  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375373  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375401  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:41.375419  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:41.375526  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375661  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:16:41.375749  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.375835  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:16:41.376026  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376031  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.376221  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:16:41.488636  127900 ssh_runner.go:195] Run: systemctl --version
	I1212 23:16:41.494315  127900 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:16:41.645474  127900 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:16:41.652912  127900 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:16:41.652988  127900 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:16:41.667662  127900 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:16:41.667680  127900 start.go:475] detecting cgroup driver to use...
	I1212 23:16:41.667747  127900 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:16:41.681625  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:16:41.693475  127900 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:16:41.693540  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:16:41.705743  127900 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:16:41.719152  127900 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:16:41.819641  127900 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:16:41.929543  127900 docker.go:219] disabling docker service ...
	I1212 23:16:41.929617  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:16:41.943407  127900 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:16:41.955372  127900 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:16:42.063078  127900 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:16:42.177422  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:16:42.192994  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:16:42.211887  127900 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1212 23:16:42.211943  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.223418  127900 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:16:42.223486  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.234905  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.245973  127900 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:16:42.261016  127900 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:16:42.272819  127900 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:16:42.283308  127900 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:16:42.283381  127900 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:16:42.296365  127900 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:16:42.307038  127900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:16:42.412672  127900 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:16:42.590363  127900 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:16:42.590470  127900 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:16:42.596285  127900 start.go:543] Will wait 60s for crictl version
	I1212 23:16:42.596360  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:42.600633  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:16:42.638709  127900 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:16:42.638811  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.694435  127900 ssh_runner.go:195] Run: crio --version
	I1212 23:16:42.750327  127900 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1212 23:16:41.391501  128156 main.go:141] libmachine: (no-preload-115023) Calling .Start
	I1212 23:16:41.391671  128156 main.go:141] libmachine: (no-preload-115023) Ensuring networks are active...
	I1212 23:16:41.392314  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network default is active
	I1212 23:16:41.392624  128156 main.go:141] libmachine: (no-preload-115023) Ensuring network mk-no-preload-115023 is active
	I1212 23:16:41.393075  128156 main.go:141] libmachine: (no-preload-115023) Getting domain xml...
	I1212 23:16:41.393720  128156 main.go:141] libmachine: (no-preload-115023) Creating domain...
	I1212 23:16:42.669200  128156 main.go:141] libmachine: (no-preload-115023) Waiting to get IP...
	I1212 23:16:42.670068  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.670482  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.670582  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.670462  128998 retry.go:31] will retry after 201.350715ms: waiting for machine to come up
	I1212 23:16:42.874061  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:42.874543  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:42.874576  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:42.874492  128998 retry.go:31] will retry after 331.205906ms: waiting for machine to come up
	I1212 23:16:43.207045  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.207590  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.207618  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.207533  128998 retry.go:31] will retry after 343.139691ms: waiting for machine to come up
	I1212 23:16:43.552253  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:43.552737  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:43.552769  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:43.552683  128998 retry.go:31] will retry after 606.192126ms: waiting for machine to come up
	I1212 23:16:44.160409  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.160877  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.160923  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.160842  128998 retry.go:31] will retry after 713.164162ms: waiting for machine to come up
	I1212 23:16:42.751897  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetIP
	I1212 23:16:42.754490  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.754832  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:16:42.754867  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:16:42.755047  127900 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:16:42.759290  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:42.770851  127900 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 23:16:42.770913  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:42.822484  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:42.822559  127900 ssh_runner.go:195] Run: which lz4
	I1212 23:16:42.826907  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:16:42.831601  127900 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:16:42.831633  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1212 23:16:44.643588  127900 crio.go:444] Took 1.816704 seconds to copy over tarball
	I1212 23:16:44.643671  127900 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:16:47.603870  127900 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960150759s)
	I1212 23:16:47.603904  127900 crio.go:451] Took 2.960288 seconds to extract the tarball
	I1212 23:16:47.603918  127900 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:16:44.875548  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:44.875971  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:44.876003  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:44.875908  128998 retry.go:31] will retry after 928.762857ms: waiting for machine to come up
	I1212 23:16:45.806556  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:45.806983  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:45.807019  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:45.806932  128998 retry.go:31] will retry after 945.322601ms: waiting for machine to come up
	I1212 23:16:46.754374  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:46.754834  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:46.754869  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:46.754818  128998 retry.go:31] will retry after 1.373584303s: waiting for machine to come up
	I1212 23:16:48.130434  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:48.130917  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:48.130950  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:48.130870  128998 retry.go:31] will retry after 1.683447661s: waiting for machine to come up
	I1212 23:16:47.644193  127900 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:16:47.696129  127900 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1212 23:16:47.696156  127900 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.696314  127900 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.696273  127900 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.696243  127900 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.696242  127900 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.696306  127900 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.696371  127900 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.696445  127900 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1212 23:16:47.697649  127900 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.697713  127900 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.697816  127900 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.697955  127900 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1212 23:16:47.698013  127900 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.698109  127900 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.698124  127900 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.698341  127900 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:47.888397  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:47.897712  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:47.897790  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1212 23:16:47.910016  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1212 23:16:47.911074  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:47.912891  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:47.923071  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:47.995042  127900 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:16:48.022161  127900 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1212 23:16:48.022215  127900 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.022270  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053132  127900 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1212 23:16:48.053181  127900 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.053236  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.053493  127900 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1212 23:16:48.053531  127900 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.053588  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.123888  127900 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1212 23:16:48.123949  127900 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.123889  127900 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1212 23:16:48.124009  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124022  127900 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1212 23:16:48.124077  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124089  127900 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1212 23:16:48.124111  127900 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1212 23:16:48.124141  127900 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.124171  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.124115  127900 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.124249  127900 ssh_runner.go:195] Run: which crictl
	I1212 23:16:48.205456  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1212 23:16:48.205488  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1212 23:16:48.205609  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1212 23:16:48.205650  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1212 23:16:48.205702  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1212 23:16:48.205789  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1212 23:16:48.205814  127900 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1212 23:16:48.351665  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1212 23:16:48.351700  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1212 23:16:48.360026  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1212 23:16:48.363255  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1212 23:16:48.363297  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1212 23:16:48.363376  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1212 23:16:48.363413  127900 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1212 23:16:48.363525  127900 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369271  127900 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1212 23:16:48.369289  127900 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1212 23:16:48.369326  127900 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1212 23:16:50.628595  127900 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.259242667s)
	I1212 23:16:50.628629  127900 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1212 23:16:50.628679  127900 cache_images.go:92] LoadImages completed in 2.932510127s
	W1212 23:16:50.628774  127900 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1212 23:16:50.628871  127900 ssh_runner.go:195] Run: crio config
	I1212 23:16:50.696623  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:16:50.696645  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:16:50.696665  127900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:16:50.696690  127900 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-549640 NodeName:old-k8s-version-549640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 23:16:50.696857  127900 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-549640"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-549640
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.146:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:16:50.696950  127900 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-549640 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:16:50.697013  127900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1212 23:16:50.706222  127900 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:16:50.706309  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:16:50.714679  127900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 23:16:50.732119  127900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:16:50.749596  127900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1212 23:16:50.766445  127900 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I1212 23:16:50.770611  127900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:16:50.783162  127900 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640 for IP: 192.168.61.146
	I1212 23:16:50.783205  127900 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:16:50.783434  127900 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:16:50.783504  127900 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:16:50.783623  127900 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.key
	I1212 23:16:50.783701  127900 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key.a124ebb4
	I1212 23:16:50.783781  127900 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key
	I1212 23:16:50.784002  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:16:50.784053  127900 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:16:50.784070  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:16:50.784118  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:16:50.784162  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:16:50.784201  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:16:50.784260  127900 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:16:50.785202  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:16:50.813072  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:16:50.838714  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:16:50.863302  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:16:50.891365  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:16:50.916623  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:16:50.946894  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:16:50.974859  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:16:51.002629  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:16:51.027782  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:16:51.052384  127900 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:16:51.077430  127900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:16:51.094703  127900 ssh_runner.go:195] Run: openssl version
	I1212 23:16:51.100625  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:16:51.111038  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116246  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.116342  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:16:51.122069  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:16:51.132325  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:16:51.142392  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147278  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.147353  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:16:51.153446  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:16:51.163491  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:16:51.173393  127900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178482  127900 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.178560  127900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:16:51.184710  127900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:16:51.194819  127900 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:16:51.199808  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:16:51.206208  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:16:51.212498  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:16:51.218555  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:16:51.224923  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:16:51.231298  127900 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:16:51.237570  127900 kubeadm.go:404] StartCluster: {Name:old-k8s-version-549640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-549640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:16:51.237672  127900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:16:51.237752  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:16:51.283890  127900 cri.go:89] found id: ""
	I1212 23:16:51.283985  127900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:16:51.296861  127900 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:16:51.296897  127900 kubeadm.go:636] restartCluster start
	I1212 23:16:51.296990  127900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:16:51.306034  127900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.307730  127900 kubeconfig.go:92] found "old-k8s-version-549640" server: "https://192.168.61.146:8443"
	I1212 23:16:51.311721  127900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:16:51.320683  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.320831  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.332122  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.332145  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.332197  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.342755  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:51.843464  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:51.843575  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:51.854933  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:52.343493  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.343579  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.354884  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:49.816605  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:49.816934  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:49.816968  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:49.816881  128998 retry.go:31] will retry after 1.775884699s: waiting for machine to come up
	I1212 23:16:51.594388  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:51.594915  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:51.594952  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:51.594866  128998 retry.go:31] will retry after 1.948886075s: waiting for machine to come up
	I1212 23:16:53.546035  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:53.546503  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:53.546538  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:53.546441  128998 retry.go:31] will retry after 3.530621748s: waiting for machine to come up
	I1212 23:16:52.842987  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:52.843085  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:52.854637  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.343155  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.343261  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.354960  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:53.843482  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:53.843555  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:53.854488  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.342926  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.343028  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.357489  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:54.843024  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:54.843111  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:54.854764  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.343252  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.343363  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.354798  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:55.843831  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:55.843931  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:55.855077  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.343753  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.343827  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.354659  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:56.843304  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:56.843423  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:56.854727  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.343292  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.343428  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.354360  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:57.078854  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:16:57.079265  128156 main.go:141] libmachine: (no-preload-115023) DBG | unable to find current IP address of domain no-preload-115023 in network mk-no-preload-115023
	I1212 23:16:57.079287  128156 main.go:141] libmachine: (no-preload-115023) DBG | I1212 23:16:57.079224  128998 retry.go:31] will retry after 3.552473985s: waiting for machine to come up
	I1212 23:17:01.924642  128282 start.go:369] acquired machines lock for "default-k8s-diff-port-850839" in 4m30.811975302s
	I1212 23:17:01.924716  128282 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:01.924725  128282 fix.go:54] fixHost starting: 
	I1212 23:17:01.925164  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:01.925207  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:01.942895  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1212 23:17:01.943340  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:01.943906  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:01.943938  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:01.944371  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:01.944594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:01.944819  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:01.946719  128282 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850839: state=Stopped err=<nil>
	I1212 23:17:01.946759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	W1212 23:17:01.946947  128282 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:01.949597  128282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850839" ...
	I1212 23:16:57.843410  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:57.843484  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:57.854821  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.343379  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.343470  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.354868  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:58.843473  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:58.843594  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:58.854752  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.343324  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.343432  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.354442  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:16:59.842979  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:16:59.843086  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:16:59.854537  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.343125  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.343201  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.354401  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:00.843565  127900 api_server.go:166] Checking apiserver status ...
	I1212 23:17:00.843642  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:00.854663  127900 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:01.321433  127900 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:01.321466  127900 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:01.321477  127900 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:01.321534  127900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:01.361643  127900 cri.go:89] found id: ""
	I1212 23:17:01.361739  127900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:01.380002  127900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:01.388875  127900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:01.388944  127900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397644  127900 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:01.397690  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:01.528111  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:00.635998  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636444  128156 main.go:141] libmachine: (no-preload-115023) Found IP for machine: 192.168.72.32
	I1212 23:17:00.636462  128156 main.go:141] libmachine: (no-preload-115023) Reserving static IP address...
	I1212 23:17:00.636478  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has current primary IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.636898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.636925  128156 main.go:141] libmachine: (no-preload-115023) DBG | skip adding static IP to network mk-no-preload-115023 - found existing host DHCP lease matching {name: "no-preload-115023", mac: "52:54:00:5e:84:7a", ip: "192.168.72.32"}
	I1212 23:17:00.636939  128156 main.go:141] libmachine: (no-preload-115023) Reserved static IP address: 192.168.72.32
	I1212 23:17:00.636961  128156 main.go:141] libmachine: (no-preload-115023) Waiting for SSH to be available...
	I1212 23:17:00.636971  128156 main.go:141] libmachine: (no-preload-115023) DBG | Getting to WaitForSSH function...
	I1212 23:17:00.639074  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639400  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.639443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.639546  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH client type: external
	I1212 23:17:00.639586  128156 main.go:141] libmachine: (no-preload-115023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa (-rw-------)
	I1212 23:17:00.639629  128156 main.go:141] libmachine: (no-preload-115023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:00.639644  128156 main.go:141] libmachine: (no-preload-115023) DBG | About to run SSH command:
	I1212 23:17:00.639663  128156 main.go:141] libmachine: (no-preload-115023) DBG | exit 0
	I1212 23:17:00.734735  128156 main.go:141] libmachine: (no-preload-115023) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:00.735132  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetConfigRaw
	I1212 23:17:00.735813  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:00.738429  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.738828  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.738871  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.739049  128156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/config.json ...
	I1212 23:17:00.739276  128156 machine.go:88] provisioning docker machine ...
	I1212 23:17:00.739299  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:00.739537  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739695  128156 buildroot.go:166] provisioning hostname "no-preload-115023"
	I1212 23:17:00.739717  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:00.739879  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.742096  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742404  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.742443  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.742591  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.742756  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.742925  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.743067  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.743224  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.743733  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.743751  128156 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-115023 && echo "no-preload-115023" | sudo tee /etc/hostname
	I1212 23:17:00.888573  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-115023
	
	I1212 23:17:00.888610  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:00.891302  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891619  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:00.891664  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:00.891852  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:00.892092  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892263  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:00.892419  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:00.892584  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:00.892911  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:00.892930  128156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-115023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-115023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-115023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:01.032180  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:01.032222  128156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:01.032257  128156 buildroot.go:174] setting up certificates
	I1212 23:17:01.032273  128156 provision.go:83] configureAuth start
	I1212 23:17:01.032291  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetMachineName
	I1212 23:17:01.032653  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.035024  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035334  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.035361  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.035494  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.037594  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.037898  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.037930  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.038066  128156 provision.go:138] copyHostCerts
	I1212 23:17:01.038122  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:01.038143  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:01.038202  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:01.038322  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:01.038334  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:01.038355  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:01.038470  128156 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:01.038481  128156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:01.038499  128156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:01.038575  128156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.no-preload-115023 san=[192.168.72.32 192.168.72.32 localhost 127.0.0.1 minikube no-preload-115023]
	I1212 23:17:01.146965  128156 provision.go:172] copyRemoteCerts
	I1212 23:17:01.147027  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:01.147053  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.149326  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149621  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.149656  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.149774  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.149969  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.150118  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.150238  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.244271  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:01.267206  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:17:01.289286  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:01.311940  128156 provision.go:86] duration metric: configureAuth took 279.648376ms
	I1212 23:17:01.311970  128156 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:01.312144  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:17:01.312229  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.314543  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.314881  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.314907  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.315055  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.315281  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315469  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.315658  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.315821  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.316162  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.316185  128156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:01.644687  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:01.644737  128156 machine.go:91] provisioned docker machine in 905.44182ms
	I1212 23:17:01.644750  128156 start.go:300] post-start starting for "no-preload-115023" (driver="kvm2")
	I1212 23:17:01.644764  128156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:01.644781  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.645148  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:01.645186  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.647976  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648333  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.648369  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.648572  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.648769  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.648972  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.649102  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.746191  128156 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:01.750374  128156 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:01.750416  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:01.750499  128156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:01.750605  128156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:01.750721  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:01.760389  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:01.788014  128156 start.go:303] post-start completed in 143.244652ms
	I1212 23:17:01.788052  128156 fix.go:56] fixHost completed within 20.420006869s
	I1212 23:17:01.788083  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.790868  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791357  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.791392  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.791675  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.791911  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.792276  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.792463  128156 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:01.792889  128156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.72.32 22 <nil> <nil>}
	I1212 23:17:01.792903  128156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:01.924437  128156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423021.865464875
	
	I1212 23:17:01.924464  128156 fix.go:206] guest clock: 1702423021.865464875
	I1212 23:17:01.924477  128156 fix.go:219] Guest: 2023-12-12 23:17:01.865464875 +0000 UTC Remote: 2023-12-12 23:17:01.788058057 +0000 UTC m=+282.352654726 (delta=77.406818ms)
	I1212 23:17:01.924532  128156 fix.go:190] guest clock delta is within tolerance: 77.406818ms
	I1212 23:17:01.924542  128156 start.go:83] releasing machines lock for "no-preload-115023", held for 20.556534447s
	I1212 23:17:01.924581  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.924871  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:01.927873  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928206  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.928238  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.928450  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929098  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929301  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:17:01.929387  128156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:01.929448  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.929516  128156 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:01.929559  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:17:01.932560  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.932593  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933001  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933031  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933059  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:01.933081  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:01.933340  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933430  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:17:01.933547  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933659  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:17:01.933919  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.933923  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:17:01.934097  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:01.934170  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:17:02.029559  128156 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:02.056382  128156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:02.199375  128156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:02.207131  128156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:02.207208  128156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:02.227083  128156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:02.227111  128156 start.go:475] detecting cgroup driver to use...
	I1212 23:17:02.227174  128156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:02.241611  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:02.253610  128156 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:02.253675  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:02.266973  128156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:02.280712  128156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:02.406583  128156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:02.548155  128156 docker.go:219] disabling docker service ...
	I1212 23:17:02.548235  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:02.563410  128156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:02.575968  128156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:02.697146  128156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:02.828963  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:02.842559  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:02.865357  128156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:02.865433  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.878154  128156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:02.878231  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.892188  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.903286  128156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:02.915201  128156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:02.927665  128156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:02.938466  128156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:02.938538  128156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:02.954428  128156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:02.966197  128156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:03.109663  128156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:03.322982  128156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:03.323068  128156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:03.329800  128156 start.go:543] Will wait 60s for crictl version
	I1212 23:17:03.329866  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.335779  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:03.385099  128156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:03.385190  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.438085  128156 ssh_runner.go:195] Run: crio --version
	I1212 23:17:03.482280  128156 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:17:03.483965  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetIP
	I1212 23:17:03.487086  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487464  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:17:03.487495  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:17:03.487694  128156 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:03.492027  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:03.506463  128156 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:17:03.506503  128156 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:03.544301  128156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:17:03.544329  128156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 23:17:03.544386  128156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.544441  128156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.544474  128156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.544440  128156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.544509  128156 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.544527  128156 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.544418  128156 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1212 23:17:03.545656  128156 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.545678  128156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.545726  128156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.545657  128156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.545747  128156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.545758  128156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.545662  128156 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1212 23:17:03.546098  128156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.724978  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.727403  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.739085  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1212 23:17:03.747535  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:03.748286  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:03.780484  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:03.826808  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:03.834529  128156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:03.840840  128156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I1212 23:17:03.840893  128156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:03.840940  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:03.868056  128156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I1212 23:17:03.868106  128156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:03.868157  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.043948  128156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I1212 23:17:04.044014  128156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.044063  128156 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1212 23:17:04.044102  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044167  128156 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1212 23:17:04.044207  128156 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.044252  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044103  128156 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.044334  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044375  128156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I1212 23:17:04.044401  128156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.044444  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I1212 23:17:04.044446  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.044489  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I1212 23:17:04.044401  128156 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 23:17:04.044520  128156 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.044545  128156 ssh_runner.go:195] Run: which crictl
	I1212 23:17:04.065308  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1212 23:17:04.065326  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:04.065380  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I1212 23:17:04.065495  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I1212 23:17:04.065541  128156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1212 23:17:04.167939  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.168062  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.207196  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.207344  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:04.261679  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1212 23:17:04.261767  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:04.293250  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 23:17:04.293382  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:04.298843  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.298927  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.298960  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:04.299043  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:04.299066  128156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1212 23:17:04.299125  128156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:04.299187  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299201  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.299219  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I1212 23:17:04.299272  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I1212 23:17:04.302178  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 23:17:04.302502  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1212 23:17:04.311377  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I1212 23:17:04.311421  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1212 23:17:01.950988  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Start
	I1212 23:17:01.951206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring networks are active...
	I1212 23:17:01.952109  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network default is active
	I1212 23:17:01.952459  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Ensuring network mk-default-k8s-diff-port-850839 is active
	I1212 23:17:01.953041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Getting domain xml...
	I1212 23:17:01.953769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Creating domain...
	I1212 23:17:03.377195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting to get IP...
	I1212 23:17:03.378157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378619  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.378696  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.378589  129129 retry.go:31] will retry after 235.08446ms: waiting for machine to come up
	I1212 23:17:03.614763  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615258  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.615288  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.615169  129129 retry.go:31] will retry after 349.415903ms: waiting for machine to come up
	I1212 23:17:03.965990  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966570  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:03.966670  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:03.966628  129129 retry.go:31] will retry after 318.332956ms: waiting for machine to come up
	I1212 23:17:04.286225  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286728  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.286760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.286676  129129 retry.go:31] will retry after 554.258457ms: waiting for machine to come up
	I1212 23:17:04.843362  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843928  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:04.843975  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:04.843882  129129 retry.go:31] will retry after 539.399246ms: waiting for machine to come up
	I1212 23:17:05.384807  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385237  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:05.385267  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:05.385213  129129 retry.go:31] will retry after 793.160743ms: waiting for machine to come up
	I1212 23:17:02.653275  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125123388s)
	I1212 23:17:02.653305  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:02.888884  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.005743  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:03.124339  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:03.124427  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.154719  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:03.679193  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.179381  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.678654  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:04.701429  127900 api_server.go:72] duration metric: took 1.577102613s to wait for apiserver process to appear ...
	I1212 23:17:04.701456  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:04.701476  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:06.586652  128156 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.287578103s)
	I1212 23:17:06.586693  128156 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I1212 23:17:06.586710  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.28741029s)
	I1212 23:17:06.586731  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I1212 23:17:06.586768  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:06.586859  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I1212 23:17:09.053122  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.466228622s)
	I1212 23:17:09.053156  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I1212 23:17:09.053187  128156 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:09.053239  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 23:17:06.180206  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180792  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:06.180826  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:06.180767  129129 retry.go:31] will retry after 1.183884482s: waiting for machine to come up
	I1212 23:17:07.365977  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:07.366537  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:07.366465  129129 retry.go:31] will retry after 1.171346567s: waiting for machine to come up
	I1212 23:17:08.539985  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540457  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:08.540493  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:08.540397  129129 retry.go:31] will retry after 1.176896883s: waiting for machine to come up
	I1212 23:17:09.718657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719110  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:09.719142  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:09.719045  129129 retry.go:31] will retry after 2.075378734s: waiting for machine to come up
	I1212 23:17:09.703531  127900 api_server.go:269] stopped: https://192.168.61.146:8443/healthz: Get "https://192.168.61.146:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1212 23:17:09.703600  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:10.880325  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:10.880391  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:11.380886  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.408357  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.408420  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:11.880867  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:11.888735  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1212 23:17:11.888783  127900 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1212 23:17:12.381393  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:12.390271  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:12.399780  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:12.399818  127900 api_server.go:131] duration metric: took 7.698353874s to wait for apiserver health ...
	I1212 23:17:12.399832  127900 cni.go:84] Creating CNI manager for ""
	I1212 23:17:12.399842  127900 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:12.401614  127900 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:12.403088  127900 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:12.416722  127900 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:12.439451  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:12.452826  127900 system_pods.go:59] 7 kube-system pods found
	I1212 23:17:12.452870  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:12.452879  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:12.452886  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:12.452893  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Pending
	I1212 23:17:12.452901  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:12.452907  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:12.452914  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:12.452924  127900 system_pods.go:74] duration metric: took 13.446573ms to wait for pod list to return data ...
	I1212 23:17:12.452937  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:12.459638  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:12.459679  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:12.459697  127900 node_conditions.go:105] duration metric: took 6.754094ms to run NodePressure ...
	I1212 23:17:12.459722  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:12.767529  127900 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775696  127900 kubeadm.go:787] kubelet initialised
	I1212 23:17:12.775720  127900 kubeadm.go:788] duration metric: took 8.16519ms waiting for restarted kubelet to initialise ...
	I1212 23:17:12.775730  127900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:12.781477  127900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.789136  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789163  127900 pod_ready.go:81] duration metric: took 7.661481ms waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.789174  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.789183  127900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.794618  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794658  127900 pod_ready.go:81] duration metric: took 5.45869ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.794671  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "etcd-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.794689  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.801021  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801052  127900 pod_ready.go:81] duration metric: took 6.346779ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.801065  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.801074  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:12.845211  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845243  127900 pod_ready.go:81] duration metric: took 44.152184ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:12.845256  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:12.845263  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.244325  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244373  127900 pod_ready.go:81] duration metric: took 399.10083ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.244387  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-proxy-b6lz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.244403  127900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:13.644414  127900 pod_ready.go:97] node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644512  127900 pod_ready.go:81] duration metric: took 400.062676ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:13.644545  127900 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-549640" hosting pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:13.644566  127900 pod_ready.go:38] duration metric: took 868.822745ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:13.644601  127900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:13.674724  127900 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:13.674813  127900 kubeadm.go:640] restartCluster took 22.377904832s
	I1212 23:17:13.674838  127900 kubeadm.go:406] StartCluster complete in 22.437279451s
	I1212 23:17:13.674872  127900 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.674959  127900 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:13.677846  127900 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:13.680423  127900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:13.680690  127900 config.go:182] Loaded profile config "old-k8s-version-549640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1212 23:17:13.680746  127900 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:13.680815  127900 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-549640"
	I1212 23:17:13.680839  127900 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-549640"
	W1212 23:17:13.680850  127900 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:13.680938  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.681342  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.681377  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.681658  127900 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-549640"
	I1212 23:17:13.681702  127900 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-549640"
	W1212 23:17:13.681711  127900 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:13.681780  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.682200  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.682237  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.682462  127900 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-549640"
	I1212 23:17:13.682544  127900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-549640"
	I1212 23:17:13.683062  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.683126  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.702138  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1212 23:17:13.702380  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I1212 23:17:13.702684  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702944  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.702956  127900 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-549640" context rescaled to 1 replicas
	I1212 23:17:13.702990  127900 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:13.704074  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.704211  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.706640  127900 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:13.708293  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:13.706664  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706671  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.706806  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39915
	I1212 23:17:13.709240  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.709383  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.709441  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.709852  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.709874  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.710209  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.710818  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.710867  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.711123  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.711765  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.711842  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.717964  127900 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-549640"
	W1212 23:17:13.717989  127900 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:13.718020  127900 host.go:66] Checking if "old-k8s-version-549640" exists ...
	I1212 23:17:13.718447  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.718493  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.738529  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I1212 23:17:13.739214  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.739827  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.739854  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.740246  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.740847  127900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:13.740917  127900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:13.747710  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I1212 23:17:13.748150  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.748772  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.748793  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.749177  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.749348  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.749413  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I1212 23:17:13.750144  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.751385  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.753201  127900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:13.754814  127900 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:13.754827  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:13.754840  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.754702  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.754893  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.756310  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.756707  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.758906  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.758937  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.758961  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.760001  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.760051  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.760288  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.763360  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.763607  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.770081  127900 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:10.003107  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 23:17:10.003162  128156 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:10.003218  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1212 23:17:12.288548  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.285296733s)
	I1212 23:17:12.288591  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1212 23:17:12.288623  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:12.288674  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I1212 23:17:13.771543  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:13.771565  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:13.769624  127900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I1212 23:17:13.771589  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.772282  127900 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:13.772841  127900 main.go:141] libmachine: Using API Version  1
	I1212 23:17:13.772898  127900 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:13.773284  127900 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:13.773451  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetState
	I1212 23:17:13.775327  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .DriverName
	I1212 23:17:13.775699  127900 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:13.775713  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:13.775738  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHHostname
	I1212 23:17:13.779093  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779539  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.779563  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.779784  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.779957  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.780110  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.780255  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.787297  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.787663  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:8c:5e", ip: ""} in network mk-old-k8s-version-549640: {Iface:virbr3 ExpiryTime:2023-12-13 00:16:32 +0000 UTC Type:0 Mac:52:54:00:e7:8c:5e Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:old-k8s-version-549640 Clientid:01:52:54:00:e7:8c:5e}
	I1212 23:17:13.787729  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | domain old-k8s-version-549640 has defined IP address 192.168.61.146 and MAC address 52:54:00:e7:8c:5e in network mk-old-k8s-version-549640
	I1212 23:17:13.788010  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHPort
	I1212 23:17:13.789645  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHKeyPath
	I1212 23:17:13.789826  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .GetSSHUsername
	I1212 23:17:13.790032  127900 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/old-k8s-version-549640/id_rsa Username:docker}
	I1212 23:17:13.956110  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:13.956139  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:13.974813  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:14.024369  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:14.045961  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:14.045998  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:14.133161  127900 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.133192  127900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:14.342486  127900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:14.827118  127900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.146649731s)
	I1212 23:17:14.827249  127900 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:14.827300  127900 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.118984074s)
	I1212 23:17:14.827324  127900 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:15.050916  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.076057269s)
	I1212 23:17:15.051030  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051049  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.051444  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.051497  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.051508  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.051517  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.051527  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.053501  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.053573  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.053586  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.229413  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.229504  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.229934  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.231467  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.231489  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.522482  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.49806272s)
	I1212 23:17:15.522554  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.522574  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.522920  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.522971  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.522989  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.523009  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.523024  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.523301  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.523322  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558083  127900 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.21554598s)
	I1212 23:17:15.558173  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558200  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.558568  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.558591  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.558603  127900 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:15.558613  127900 main.go:141] libmachine: (old-k8s-version-549640) Calling .Close
	I1212 23:17:15.559348  127900 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:15.559370  127900 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:15.559364  127900 main.go:141] libmachine: (old-k8s-version-549640) DBG | Closing plugin on server side
	I1212 23:17:15.559387  127900 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-549640"
	I1212 23:17:15.562044  127900 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 23:17:11.796385  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796896  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:11.796930  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:11.796831  129129 retry.go:31] will retry after 2.569081306s: waiting for machine to come up
	I1212 23:17:14.369090  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:14.369594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:14.369522  129129 retry.go:31] will retry after 3.566691604s: waiting for machine to come up
	I1212 23:17:15.563724  127900 addons.go:502] enable addons completed in 1.882971652s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 23:17:17.065214  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:15.574585  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.285870336s)
	I1212 23:17:15.574622  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I1212 23:17:15.574667  128156 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:15.574736  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1212 23:17:17.937618  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938021  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:17.938052  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:17.937984  129129 retry.go:31] will retry after 2.790781234s: waiting for machine to come up
	I1212 23:17:20.730659  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731151  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | unable to find current IP address of domain default-k8s-diff-port-850839 in network mk-default-k8s-diff-port-850839
	I1212 23:17:20.731179  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | I1212 23:17:20.731128  129129 retry.go:31] will retry after 5.345575973s: waiting for machine to come up
	I1212 23:17:19.564344  127900 node_ready.go:58] node "old-k8s-version-549640" has status "Ready":"False"
	I1212 23:17:21.564330  127900 node_ready.go:49] node "old-k8s-version-549640" has status "Ready":"True"
	I1212 23:17:21.564356  127900 node_ready.go:38] duration metric: took 6.737022414s waiting for node "old-k8s-version-549640" to be "Ready" ...
	I1212 23:17:21.564367  127900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:21.569573  127900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:19.606668  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.031891087s)
	I1212 23:17:19.606701  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1212 23:17:19.606731  128156 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:19.606791  128156 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I1212 23:17:21.765860  128156 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.159035751s)
	I1212 23:17:21.765896  128156 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17761-76611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I1212 23:17:21.765934  128156 cache_images.go:123] Successfully loaded all cached images
	I1212 23:17:21.765944  128156 cache_images.go:92] LoadImages completed in 18.221602939s
	I1212 23:17:21.766033  128156 ssh_runner.go:195] Run: crio config
	I1212 23:17:21.818966  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:21.818996  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:21.819021  128156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:21.819048  128156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.32 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-115023 NodeName:no-preload-115023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:21.819220  128156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-115023"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:21.819310  128156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-115023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
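
The generated kubeadm config above pins the pod subnet to 10.244.0.0/16 and the service CIDR to 10.96.0.0/12; those two ranges must not overlap for routing to behave. A minimal Go sketch of that sanity check using only the standard library (the values are taken from the config above, the helper name is illustrative, not minikube code):

package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges share any addresses.
// Two ranges overlap exactly when either network contains the other's base address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	// Values taken from the generated kubeadm config above.
	podSubnet := "10.244.0.0/16"
	serviceSubnet := "10.96.0.0/12"
	overlap, err := cidrsOverlap(podSubnet, serviceSubnet)
	if err != nil {
		panic(err)
	}
	fmt.Printf("podSubnet/serviceSubnet overlap: %v\n", overlap) // expected: false
}
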
	I1212 23:17:21.819369  128156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:17:21.829605  128156 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:21.829690  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:21.838518  128156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 23:17:21.854214  128156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:17:21.869927  128156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1212 23:17:21.886723  128156 ssh_runner.go:195] Run: grep 192.168.72.32	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:21.890481  128156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:21.902964  128156 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023 for IP: 192.168.72.32
	I1212 23:17:21.902993  128156 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:21.903156  128156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:21.903194  128156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:21.903275  128156 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.key
	I1212 23:17:21.903357  128156 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key.9d394d40
	I1212 23:17:21.903393  128156 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key
	I1212 23:17:21.903509  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:21.903540  128156 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:21.903550  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:21.903583  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:21.903623  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:21.903647  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:21.903687  128156 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:21.904310  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:21.928095  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 23:17:21.951412  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:21.974936  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:21.997877  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:22.020598  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:22.042859  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:22.065941  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:22.088688  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:22.110493  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:22.132736  128156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:22.154394  128156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:22.170427  128156 ssh_runner.go:195] Run: openssl version
	I1212 23:17:22.176106  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:22.186617  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191355  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.191423  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:22.196989  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:22.208456  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:22.219355  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224154  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.224224  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:22.230069  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:22.240929  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:22.251836  128156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256441  128156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.256496  128156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:22.261952  128156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:22.272452  128156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:22.277105  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:22.283114  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:22.288860  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:22.294416  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:22.300148  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:22.306380  128156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
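
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit means it will. A rough Go equivalent using crypto/x509 (the file path in main is a placeholder for illustration, not one of the paths above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will expire
// within the given duration, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local file; the report checks files under /var/lib/minikube/certs.
	expiring, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
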
	I1212 23:17:22.316419  128156 kubeadm.go:404] StartCluster: {Name:no-preload-115023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-115023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:22.316550  128156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:22.316623  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:22.358616  128156 cri.go:89] found id: ""
	I1212 23:17:22.358703  128156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:22.368800  128156 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:22.368823  128156 kubeadm.go:636] restartCluster start
	I1212 23:17:22.368883  128156 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:22.378570  128156 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.380161  128156 kubeconfig.go:92] found "no-preload-115023" server: "https://192.168.72.32:8443"
	I1212 23:17:22.383451  128156 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:22.392995  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.393064  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.405318  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.405337  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.405370  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.416721  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:22.917468  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:22.917571  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:22.929995  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.417616  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.417752  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.430907  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:23.917522  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:23.917607  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:23.929655  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:24.417316  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.417427  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.429590  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
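
The repeated "Checking apiserver status ..." entries above show a simple poll loop: run pgrep for the kube-apiserver process roughly every half second until it appears or a deadline passes. A stripped-down Go sketch of that pattern (an illustration, not minikube's actual api_server.go logic):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process, roughly like the
// retry loop logged above.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
		fmt.Println("apiserver did not come up:", err)
		return
	}
	fmt.Println("apiserver is running")
}
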
	I1212 23:17:27.436348  127760 start.go:369] acquired machines lock for "embed-certs-809120" in 1m2.018372087s
	I1212 23:17:27.436407  127760 start.go:96] Skipping create...Using existing machine configuration
	I1212 23:17:27.436418  127760 fix.go:54] fixHost starting: 
	I1212 23:17:27.436818  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:27.436856  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:27.453079  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I1212 23:17:27.453449  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:27.453967  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:17:27.453999  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:27.454365  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:27.454580  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:27.454743  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:17:27.456367  127760 fix.go:102] recreateIfNeeded on embed-certs-809120: state=Stopped err=<nil>
	I1212 23:17:27.456395  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	W1212 23:17:27.456549  127760 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 23:17:27.458402  127760 out.go:177] * Restarting existing kvm2 VM for "embed-certs-809120" ...
	I1212 23:17:23.588762  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:26.087305  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:27.459818  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Start
	I1212 23:17:27.459994  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring networks are active...
	I1212 23:17:27.460587  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network default is active
	I1212 23:17:27.460997  127760 main.go:141] libmachine: (embed-certs-809120) Ensuring network mk-embed-certs-809120 is active
	I1212 23:17:27.461361  127760 main.go:141] libmachine: (embed-certs-809120) Getting domain xml...
	I1212 23:17:27.462026  127760 main.go:141] libmachine: (embed-certs-809120) Creating domain...
	I1212 23:17:26.081099  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081594  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Found IP for machine: 192.168.39.180
	I1212 23:17:26.081626  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has current primary IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.081637  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserving static IP address...
	I1212 23:17:26.082029  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Reserved static IP address: 192.168.39.180
	I1212 23:17:26.082080  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.082119  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Waiting for SSH to be available...
	I1212 23:17:26.082157  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | skip adding static IP to network mk-default-k8s-diff-port-850839 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850839", mac: "52:54:00:6d:81:5e", ip: "192.168.39.180"}
	I1212 23:17:26.082182  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Getting to WaitForSSH function...
	I1212 23:17:26.084444  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084769  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.084803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.084864  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH client type: external
	I1212 23:17:26.084925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa (-rw-------)
	I1212 23:17:26.084971  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:26.084992  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | About to run SSH command:
	I1212 23:17:26.085006  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | exit 0
	I1212 23:17:26.175122  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:26.175455  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetConfigRaw
	I1212 23:17:26.176092  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.178747  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179016  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.179044  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.179388  128282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/config.json ...
	I1212 23:17:26.179602  128282 machine.go:88] provisioning docker machine ...
	I1212 23:17:26.179624  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:26.179853  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180033  128282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850839"
	I1212 23:17:26.180051  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.180209  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.182470  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.182812  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.182848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.183003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.183193  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183374  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.183538  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.183709  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.184100  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.184115  128282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850839 && echo "default-k8s-diff-port-850839" | sudo tee /etc/hostname
	I1212 23:17:26.313520  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850839
	
	I1212 23:17:26.313562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.316848  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.317633  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.317817  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.318047  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318229  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.318344  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.318567  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.318888  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.318907  128282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850839/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:26.443174  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:26.443206  128282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:26.443224  128282 buildroot.go:174] setting up certificates
	I1212 23:17:26.443255  128282 provision.go:83] configureAuth start
	I1212 23:17:26.443273  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetMachineName
	I1212 23:17:26.443628  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:26.446155  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446467  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.446501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.446568  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.449661  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450005  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.450041  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.450170  128282 provision.go:138] copyHostCerts
	I1212 23:17:26.450235  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:26.450258  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:26.450330  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:26.450442  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:26.450453  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:26.450483  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:26.450555  128282 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:26.450565  128282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:26.450592  128282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:26.450656  128282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850839 san=[192.168.39.180 192.168.39.180 localhost 127.0.0.1 minikube default-k8s-diff-port-850839]
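
The server certificate above is generated with a SAN list covering the VM's IP address, localhost, and the machine names. A condensed, self-signed illustration of building such a SAN set with crypto/x509 (minikube signs against its own CA and handles keys differently; this is only a sketch with assumed values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; real provisioning reuses the CA material above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-850839"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log entry above.
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-850839"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.180"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
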
	I1212 23:17:26.688969  128282 provision.go:172] copyRemoteCerts
	I1212 23:17:26.689035  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:26.689060  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.691731  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692004  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.692033  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.692207  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.692441  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.692607  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.692736  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:26.781407  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:26.804712  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1212 23:17:26.827036  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:26.848977  128282 provision.go:86] duration metric: configureAuth took 405.706324ms
	I1212 23:17:26.849006  128282 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:26.849214  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:26.849310  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:26.851925  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852281  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:26.852314  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:26.852486  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:26.852679  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.852860  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:26.853003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:26.853172  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:26.853688  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:26.853711  128282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:27.183932  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:27.183961  128282 machine.go:91] provisioned docker machine in 1.004345653s
	I1212 23:17:27.183972  128282 start.go:300] post-start starting for "default-k8s-diff-port-850839" (driver="kvm2")
	I1212 23:17:27.183982  128282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:27.183999  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.184348  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:27.184398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.187375  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187709  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.187759  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.187858  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.188054  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.188248  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.188400  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.277858  128282 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:27.282128  128282 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:27.282157  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:27.282244  128282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:27.282368  128282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:27.282481  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:27.291755  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:27.313541  128282 start.go:303] post-start completed in 129.554425ms
	I1212 23:17:27.313563  128282 fix.go:56] fixHost completed within 25.388839079s
	I1212 23:17:27.313586  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.316388  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316737  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.316760  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.316934  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.317141  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317343  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.317540  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.317789  128282 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:27.318143  128282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I1212 23:17:27.318158  128282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:17:27.436207  128282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423047.383892438
	
	I1212 23:17:27.436230  128282 fix.go:206] guest clock: 1702423047.383892438
	I1212 23:17:27.436237  128282 fix.go:219] Guest: 2023-12-12 23:17:27.383892438 +0000 UTC Remote: 2023-12-12 23:17:27.313567546 +0000 UTC m=+296.357388926 (delta=70.324892ms)
	I1212 23:17:27.436261  128282 fix.go:190] guest clock delta is within tolerance: 70.324892ms
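
The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the drift if it is small (here 70.324892ms). A tiny Go sketch of that comparison; the one-second limit is an assumption for illustration, not minikube's actual tolerance:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute difference between the guest
// clock and the local clock stays under the given limit.
func withinTolerance(guest, local time.Time, limit time.Duration) bool {
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta <= limit
}

func main() {
	// The two timestamps logged above.
	guest := time.Date(2023, 12, 12, 23, 17, 27, 383892438, time.UTC)
	local := time.Date(2023, 12, 12, 23, 17, 27, 313567546, time.UTC)
	fmt.Println("delta:", guest.Sub(local)) // ~70.324892ms, as logged
	fmt.Println("within tolerance:", withinTolerance(guest, local, time.Second))
}
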
	I1212 23:17:27.436266  128282 start.go:83] releasing machines lock for "default-k8s-diff-port-850839", held for 25.511577503s
	I1212 23:17:27.436289  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.436571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:27.439315  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439697  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.439730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.439891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440396  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440660  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:27.440741  128282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:27.440793  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.440873  128282 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:27.440891  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:27.443558  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443880  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.443938  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.443965  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444132  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444338  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444369  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:27.444398  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.444563  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:27.444741  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.444788  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:27.444907  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:27.445073  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:27.528730  128282 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:27.563590  128282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:27.715220  128282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:27.722775  128282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:27.722883  128282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:27.743217  128282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:27.743264  128282 start.go:475] detecting cgroup driver to use...
	I1212 23:17:27.743344  128282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:27.759125  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:27.772532  128282 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:27.772602  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:27.786439  128282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:27.800413  128282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:27.905626  128282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:28.037279  128282 docker.go:219] disabling docker service ...
	I1212 23:17:28.037362  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:28.050670  128282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:28.063551  128282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:28.195512  128282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:28.306881  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:28.324506  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:28.344908  128282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:28.344992  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.354788  128282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:28.354883  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.364157  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.373415  128282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:28.383391  128282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:17:28.393203  128282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:28.401935  128282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:28.402006  128282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:28.413618  128282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:28.426007  128282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:28.536725  128282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:28.711815  128282 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:28.711892  128282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:28.717242  128282 start.go:543] Will wait 60s for crictl version
	I1212 23:17:28.717306  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:17:28.724383  128282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:28.779687  128282 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:28.779781  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.834147  128282 ssh_runner.go:195] Run: crio --version
	I1212 23:17:28.894131  128282 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:24.917347  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:24.917438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:24.928690  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.417259  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.417343  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.428544  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:25.917136  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:25.917212  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:25.927813  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.417826  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.417917  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.428147  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:26.917724  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:26.917803  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:26.929515  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.416997  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.417102  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.428180  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:27.917712  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:27.917830  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:27.931264  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.417370  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.417479  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.432478  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:28.916907  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:28.917039  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:28.932698  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:29.416883  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.416989  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.434138  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
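
Editor's note: the block above repeats the same probe roughly twice a second: run "pgrep -xnf kube-apiserver.*minikube.*" until the apiserver process exists or a deadline passes, logging "stopped: unable to get apiserver pid" on every miss. A minimal sketch of that polling pattern (local pgrep instead of minikube's SSH runner, which also prefixes sudo):

// Sketch: wait for the kube-apiserver process by polling pgrep, mirroring the
// repeated "Checking apiserver status ..." lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 once the process exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}
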
	I1212 23:17:28.895767  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetIP
	I1212 23:17:28.898899  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899233  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:28.899276  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:28.899500  128282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:28.903950  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:28.917270  128282 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:28.917383  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:28.956752  128282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:28.956832  128282 ssh_runner.go:195] Run: which lz4
	I1212 23:17:28.961387  128282 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:28.965850  128282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:28.965925  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:30.869493  128282 crio.go:444] Took 1.908152 seconds to copy over tarball
	I1212 23:17:30.869580  128282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:28.610279  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:31.088625  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:28.873664  127760 main.go:141] libmachine: (embed-certs-809120) Waiting to get IP...
	I1212 23:17:28.874489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:28.874895  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:28.874992  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:28.874848  129329 retry.go:31] will retry after 244.313261ms: waiting for machine to come up
	I1212 23:17:29.120442  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.120959  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.120997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.120852  129329 retry.go:31] will retry after 369.234988ms: waiting for machine to come up
	I1212 23:17:29.491516  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.492081  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.492124  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.492035  129329 retry.go:31] will retry after 448.746179ms: waiting for machine to come up
	I1212 23:17:29.942643  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:29.943286  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:29.943319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:29.943229  129329 retry.go:31] will retry after 520.98965ms: waiting for machine to come up
	I1212 23:17:30.465955  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:30.466468  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:30.466503  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:30.466430  129329 retry.go:31] will retry after 617.123622ms: waiting for machine to come up
	I1212 23:17:31.085159  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.085706  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.085746  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.085665  129329 retry.go:31] will retry after 853.539861ms: waiting for machine to come up
	I1212 23:17:31.940795  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:31.941240  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:31.941265  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:31.941169  129329 retry.go:31] will retry after 960.346145ms: waiting for machine to come up
	I1212 23:17:29.916897  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:29.917007  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:29.932055  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.417555  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.417657  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.433218  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:30.917841  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:30.917967  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:30.933255  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.417271  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.417357  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.429192  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:31.917804  128156 api_server.go:166] Checking apiserver status ...
	I1212 23:17:31.917908  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:31.930333  128156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:32.393106  128156 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:32.393209  128156 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:32.393228  128156 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:32.393315  128156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:32.445688  128156 cri.go:89] found id: ""
	I1212 23:17:32.445774  128156 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:32.462269  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:32.473687  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:32.473768  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483043  128156 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:32.483075  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:32.656758  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.442637  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.666131  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.751061  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:33.855861  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:33.855952  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:33.879438  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.403317  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:34.178083  128282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.308463726s)
	I1212 23:17:34.178124  128282 crio.go:451] Took 3.308601 seconds to extract the tarball
	I1212 23:17:34.178136  128282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:34.219740  128282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:34.268961  128282 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:34.268987  128282 cache_images.go:84] Images are preloaded, skipping loading
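
Editor's note: the "all images are preloaded" / "assuming images are not preloaded" decisions above come from inspecting "crictl images --output json" for the expected control-plane image. A sketch of that check; the JSON field names follow the CRI ListImagesResponse and are an assumption here, not a quote of minikube's parser:

// Sketch: decide whether the preloaded image set already contains
// registry.k8s.io/kube-apiserver:v1.28.4 by parsing crictl's JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.28.4"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image found:", want)
				return
			}
		}
	}
	fmt.Println("not preloaded:", want)
}
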
	I1212 23:17:34.269051  128282 ssh_runner.go:195] Run: crio config
	I1212 23:17:34.326979  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:34.327007  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:34.327033  128282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:34.327060  128282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850839 NodeName:default-k8s-diff-port-850839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:34.327252  128282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:17:34.327353  128282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1212 23:17:34.327425  128282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:34.338300  128282 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:34.338385  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:34.347329  128282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1212 23:17:34.364120  128282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:34.380374  128282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1212 23:17:34.398219  128282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:34.402134  128282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:34.415197  128282 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839 for IP: 192.168.39.180
	I1212 23:17:34.415252  128282 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:34.415436  128282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:34.415472  128282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:34.415540  128282 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.key
	I1212 23:17:34.415593  128282 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key.66237cde
	I1212 23:17:34.415626  128282 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key
	I1212 23:17:34.415739  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:34.415780  128282 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:34.415793  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:34.415841  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:34.415886  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:34.415931  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:34.415990  128282 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:34.416632  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:34.440783  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:34.466303  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:34.491267  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:17:34.516659  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:34.542472  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:34.569367  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:34.599627  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:34.628781  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:34.655361  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:34.681199  128282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:34.706068  128282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:34.724142  128282 ssh_runner.go:195] Run: openssl version
	I1212 23:17:34.730108  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:34.740221  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745118  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.745203  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:34.751091  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:34.761120  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:34.771456  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776480  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.776559  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:34.782833  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:34.793597  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:34.804519  128282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809767  128282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.809831  128282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:34.815838  128282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:34.825967  128282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:34.831487  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:34.838280  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:34.845663  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:34.854810  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:34.862962  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:34.869641  128282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
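
Editor's note: the six "-checkend 86400" runs above are freshness probes: openssl exits 0 only if the certificate remains valid for at least another 86400 seconds (one day). A short sketch of one such probe, with the path hard-coded for illustration:

// Sketch: check that a certificate is still valid for at least 24h,
// as the logged `openssl x509 -checkend 86400` commands do.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cert := "/var/lib/minikube/certs/apiserver-etcd-client.crt"
	cmd := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		fmt.Println(cert, "expires within 24h or could not be read:", err)
		return
	}
	fmt.Println(cert, "is valid for at least another 24h")
}
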
	I1212 23:17:34.876373  128282 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:34.876509  128282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:34.876579  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:34.918413  128282 cri.go:89] found id: ""
	I1212 23:17:34.918486  128282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:34.928267  128282 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:34.928305  128282 kubeadm.go:636] restartCluster start
	I1212 23:17:34.928396  128282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:34.938202  128282 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.939397  128282 kubeconfig.go:92] found "default-k8s-diff-port-850839" server: "https://192.168.39.180:8444"
	I1212 23:17:34.941945  128282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:34.953458  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.953552  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.965537  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:34.965561  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:34.965623  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:34.977454  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.478209  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.478292  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.505825  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:35.978537  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:35.978615  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:35.991422  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:33.591861  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:35.629760  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:32.902889  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:32.903556  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:32.903588  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:32.903500  129329 retry.go:31] will retry after 1.225619987s: waiting for machine to come up
	I1212 23:17:34.130560  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:34.131066  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:34.131098  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:34.131009  129329 retry.go:31] will retry after 1.544530633s: waiting for machine to come up
	I1212 23:17:35.677455  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:35.677916  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:35.677939  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:35.677902  129329 retry.go:31] will retry after 1.740004665s: waiting for machine to come up
	I1212 23:17:37.419743  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:37.420167  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:37.420203  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:37.420121  129329 retry.go:31] will retry after 2.220250897s: waiting for machine to come up
	I1212 23:17:34.902923  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.402835  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:35.903269  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.403728  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:36.903298  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.403775  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:37.903663  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.403892  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:38.429370  128156 api_server.go:72] duration metric: took 4.573508338s to wait for apiserver process to appear ...
	I1212 23:17:38.429402  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:38.429424  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.429952  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.430019  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:38.430455  128156 api_server.go:269] stopped: https://192.168.72.32:8443/healthz: Get "https://192.168.72.32:8443/healthz": dial tcp 192.168.72.32:8443: connect: connection refused
	I1212 23:17:38.931234  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:36.478240  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.478317  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.494437  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:36.978574  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:36.978654  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:36.995711  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.478404  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.478484  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.492356  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:37.977979  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:37.978123  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:37.993637  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.478102  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.478227  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.494347  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.977645  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:38.977771  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:38.994288  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.477795  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.477942  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.495986  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:39.978587  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:39.978695  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:39.994551  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.477958  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.478056  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.492956  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:40.978560  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:40.978663  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:40.994199  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:38.089524  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:40.591793  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:39.643094  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:39.643562  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:39.643603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:39.643508  129329 retry.go:31] will retry after 2.987735855s: waiting for machine to come up
	I1212 23:17:42.633477  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:42.633958  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:42.633993  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:42.633907  129329 retry.go:31] will retry after 3.131576961s: waiting for machine to come up
	I1212 23:17:41.334632  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:41.334685  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:41.334703  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.392719  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.392768  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
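
Editor's note: the healthz exchange above follows a fixed pattern: an anonymous probe first gets 403 (system:anonymous may not read /healthz), then 500 while poststarthooks are still failing, and finally 200 once the apiserver is ready. A minimal sketch of that polling loop; TLS verification is skipped only because this is an anonymous probe against a self-signed cluster CA, and the endpoint address is the one from the log:

// Sketch: poll https://<node>:8443/healthz until it returns 200, treating
// 403/500 responses and connection errors as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.32:8443/healthz"
	for i := 0; i < 120; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "connection refused" while the pod restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
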
	I1212 23:17:41.431413  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.445393  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.445428  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:41.930605  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:41.935880  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:41.935918  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.430551  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.435690  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:42.435720  128156 api_server.go:103] status: https://192.168.72.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:42.931341  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:17:42.936295  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:17:42.944125  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:17:42.944163  128156 api_server.go:131] duration metric: took 4.514753942s to wait for apiserver health ...
	I1212 23:17:42.944173  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:17:42.944179  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:42.945951  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:17:42.947258  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:42.957745  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:42.978269  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:42.990231  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:42.990267  128156 system_pods.go:61] "coredns-76f75df574-2rdhr" [266c2440-a927-476c-b918-d0712834fc2f] Running
	I1212 23:17:42.990274  128156 system_pods.go:61] "etcd-no-preload-115023" [522ee237-12e0-4b83-9e20-05713cd87c7d] Running
	I1212 23:17:42.990281  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [9048886a-1b8b-407d-bd71-c5a850d88a5f] Running
	I1212 23:17:42.990287  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [4652e03f-2622-41d8-8791-bcc648d43432] Running
	I1212 23:17:42.990292  128156 system_pods.go:61] "kube-proxy-rqhmc" [b7514603-3389-4a38-b24a-e9c7948364bc] Running
	I1212 23:17:42.990299  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [7ce16391-9627-454b-b0de-27af47921997] Running
	I1212 23:17:42.990308  128156 system_pods.go:61] "metrics-server-57f55c9bc5-b42rv" [f27bd873-340b-4ae1-922a-ed8f52d558dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:42.990316  128156 system_pods.go:61] "storage-provisioner" [d9565f7f-dcf4-4e4d-88fd-e8a54bbf0e40] Running
	I1212 23:17:42.990327  128156 system_pods.go:74] duration metric: took 12.031472ms to wait for pod list to return data ...
	I1212 23:17:42.990347  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:42.994787  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:42.994817  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:42.994827  128156 node_conditions.go:105] duration metric: took 4.471497ms to run NodePressure ...
	I1212 23:17:42.994844  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.281299  128156 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:43.286299  128156 retry.go:31] will retry after 184.15509ms: kubelet not initialised
	I1212 23:17:43.476354  128156 retry.go:31] will retry after 533.806598ms: kubelet not initialised
	I1212 23:17:44.036349  128156 retry.go:31] will retry after 483.473669ms: kubelet not initialised
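
The repeated [+]/[-] dumps above are minikube polling the apiserver's /healthz endpoint (here https://192.168.72.32:8443/healthz) roughly every half second until the failing post-start hooks clear and the endpoint flips from 500 to 200. A self-contained Go sketch of that kind of wait loop, for orientation only — it is not the actual api_server.go code; the URL and the waiting budget are simply taken from the log:

// healthzwait.go — illustrative approximation of the wait loop the log shows.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200,
// or gives up after timeout. TLS verification is skipped because this ad-hoc
// client does not trust the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 500 responses carry the [+]/[-] post-start hook report seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible above
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.32:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
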
	I1212 23:17:41.477798  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.477898  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.493963  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:41.977991  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:41.978077  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:41.994590  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.478242  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.478334  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.495374  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:42.978495  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:42.978597  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:42.992337  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.477604  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.477667  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.491061  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:43.977638  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:43.977754  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:43.991654  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.478308  128282 api_server.go:166] Checking apiserver status ...
	I1212 23:17:44.478409  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:44.494965  128282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:44.953708  128282 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:17:44.953763  128282 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:17:44.953780  128282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:17:44.953874  128282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:45.003440  128282 cri.go:89] found id: ""
	I1212 23:17:45.003519  128282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:17:45.021471  128282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:17:45.036134  128282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:17:45.036203  128282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049188  128282 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:17:45.049214  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.197549  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:45.958707  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:43.088583  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.587947  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:47.588918  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:45.768814  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:45.769238  127760 main.go:141] libmachine: (embed-certs-809120) DBG | unable to find current IP address of domain embed-certs-809120 in network mk-embed-certs-809120
	I1212 23:17:45.769270  127760 main.go:141] libmachine: (embed-certs-809120) DBG | I1212 23:17:45.769171  129329 retry.go:31] will retry after 3.722952815s: waiting for machine to come up
	I1212 23:17:44.529285  128156 kubeadm.go:787] kubelet initialised
	I1212 23:17:44.529310  128156 kubeadm.go:788] duration metric: took 1.247981757s waiting for restarted kubelet to initialise ...
	I1212 23:17:44.529321  128156 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:44.551751  128156 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:46.588427  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:48.589582  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:46.161702  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.251040  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:46.344286  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:46.344385  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.359646  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:46.875339  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.375793  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:47.875532  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.375394  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.875412  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:48.903144  128282 api_server.go:72] duration metric: took 2.558861066s to wait for apiserver process to appear ...
	I1212 23:17:48.903170  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:48.903188  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.903660  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:48.903697  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:48.904122  128282 api_server.go:269] stopped: https://192.168.39.180:8444/healthz: Get "https://192.168.39.180:8444/healthz": dial tcp 192.168.39.180:8444: connect: connection refused
	I1212 23:17:49.404880  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:50.088813  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.089208  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:49.494927  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495446  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has current primary IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.495474  127760 main.go:141] libmachine: (embed-certs-809120) Found IP for machine: 192.168.50.221
	I1212 23:17:49.495489  127760 main.go:141] libmachine: (embed-certs-809120) Reserving static IP address...
	I1212 23:17:49.495884  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.495933  127760 main.go:141] libmachine: (embed-certs-809120) DBG | skip adding static IP to network mk-embed-certs-809120 - found existing host DHCP lease matching {name: "embed-certs-809120", mac: "52:54:00:1c:a9:e8", ip: "192.168.50.221"}
	I1212 23:17:49.495954  127760 main.go:141] libmachine: (embed-certs-809120) Reserved static IP address: 192.168.50.221
	I1212 23:17:49.495971  127760 main.go:141] libmachine: (embed-certs-809120) Waiting for SSH to be available...
	I1212 23:17:49.495987  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Getting to WaitForSSH function...
	I1212 23:17:49.498007  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498362  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.498398  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.498514  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH client type: external
	I1212 23:17:49.498545  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa (-rw-------)
	I1212 23:17:49.498583  127760 main.go:141] libmachine: (embed-certs-809120) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:17:49.498598  127760 main.go:141] libmachine: (embed-certs-809120) DBG | About to run SSH command:
	I1212 23:17:49.498615  127760 main.go:141] libmachine: (embed-certs-809120) DBG | exit 0
	I1212 23:17:49.635655  127760 main.go:141] libmachine: (embed-certs-809120) DBG | SSH cmd err, output: <nil>: 
	I1212 23:17:49.636039  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetConfigRaw
	I1212 23:17:49.636795  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.639601  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640032  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.640059  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.640367  127760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/config.json ...
	I1212 23:17:49.640604  127760 machine.go:88] provisioning docker machine ...
	I1212 23:17:49.640629  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:49.640901  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641044  127760 buildroot.go:166] provisioning hostname "embed-certs-809120"
	I1212 23:17:49.641066  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.641184  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.643599  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644050  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.644082  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.644210  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.644439  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644612  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.644791  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.644961  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.645333  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.645350  127760 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-809120 && echo "embed-certs-809120" | sudo tee /etc/hostname
	I1212 23:17:49.779263  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-809120
	
	I1212 23:17:49.779298  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.782329  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782739  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.782772  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.782891  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:49.783133  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783306  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:49.783466  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:49.783641  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:49.784029  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:49.784055  127760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-809120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-809120/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-809120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:17:49.914603  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:17:49.914641  127760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:17:49.914673  127760 buildroot.go:174] setting up certificates
	I1212 23:17:49.914686  127760 provision.go:83] configureAuth start
	I1212 23:17:49.914704  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetMachineName
	I1212 23:17:49.915021  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:49.918281  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918661  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.918715  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.918849  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:49.921184  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921566  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:49.921603  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:49.921732  127760 provision.go:138] copyHostCerts
	I1212 23:17:49.921811  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:17:49.921824  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:17:49.921891  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:17:49.922013  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:17:49.922030  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:17:49.922061  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:17:49.922139  127760 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:17:49.922149  127760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:17:49.922174  127760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:17:49.922255  127760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-809120 san=[192.168.50.221 192.168.50.221 localhost 127.0.0.1 minikube embed-certs-809120]
	I1212 23:17:50.309293  127760 provision.go:172] copyRemoteCerts
	I1212 23:17:50.309361  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:17:50.309389  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.312319  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.312745  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.312942  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.313157  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.313362  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.313554  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.401075  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:17:50.426930  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 23:17:50.452785  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 23:17:50.480062  127760 provision.go:86] duration metric: configureAuth took 565.356144ms
	I1212 23:17:50.480098  127760 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:17:50.480377  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:17:50.480523  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.483621  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484035  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.484091  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.484244  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.484455  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484603  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.484728  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.484903  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.485264  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.485282  127760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:17:50.842779  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:17:50.842815  127760 machine.go:91] provisioned docker machine in 1.202192917s
	I1212 23:17:50.842831  127760 start.go:300] post-start starting for "embed-certs-809120" (driver="kvm2")
	I1212 23:17:50.842846  127760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:17:50.842882  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:50.843282  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:17:50.843318  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.846267  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846670  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.846704  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.846881  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.847102  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.847322  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.847496  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:50.934904  127760 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:17:50.939875  127760 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:17:50.939912  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:17:50.940000  127760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:17:50.940130  127760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:17:50.940242  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:17:50.950764  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:50.977204  127760 start.go:303] post-start completed in 134.34972ms
	I1212 23:17:50.977232  127760 fix.go:56] fixHost completed within 23.540815255s
	I1212 23:17:50.977256  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:50.980553  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981029  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:50.981065  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:50.981350  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:50.981611  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981766  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:50.981917  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:50.982111  127760 main.go:141] libmachine: Using SSH client type: native
	I1212 23:17:50.982448  127760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I1212 23:17:50.982467  127760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:17:51.096273  127760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702423071.035304579
	
	I1212 23:17:51.096303  127760 fix.go:206] guest clock: 1702423071.035304579
	I1212 23:17:51.096311  127760 fix.go:219] Guest: 2023-12-12 23:17:51.035304579 +0000 UTC Remote: 2023-12-12 23:17:50.977236465 +0000 UTC m=+368.149225502 (delta=58.068114ms)
	I1212 23:17:51.096365  127760 fix.go:190] guest clock delta is within tolerance: 58.068114ms
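
The `date +%!s(MISSING).%!N(MISSING)` line above appears to be `date +%s.%N` with its format verbs mangled by a later Printf that had no arguments; minikube runs it over SSH to compare the guest clock against the host and accepts the machine when the delta stays within tolerance (58ms here). A rough local sketch of that comparison, with the SSH hop omitted and a hypothetical one-second tolerance:

// clockskew.go — illustrative sketch only; not minikube's fix.go logic.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const tolerance = time.Second // hypothetical bound for this sketch

	// In the test this command runs on the guest over SSH; here it runs
	// locally just to show the parsing and the tolerance check.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	guestSecs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		fmt.Println(err)
		return
	}

	hostSecs := float64(time.Now().UnixNano()) / 1e9
	delta := time.Duration((hostSecs - guestSecs) * float64(time.Second))
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}
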
	I1212 23:17:51.096375  127760 start.go:83] releasing machines lock for "embed-certs-809120", held for 23.659994787s
	I1212 23:17:51.096401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.096676  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:51.099275  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099683  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.099714  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.099864  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100401  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100586  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:17:51.100671  127760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:17:51.100713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.100833  127760 ssh_runner.go:195] Run: cat /version.json
	I1212 23:17:51.100859  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:17:51.103808  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104103  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104214  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104268  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104379  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:51.104415  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104405  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:51.104615  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:17:51.104620  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104817  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:17:51.104838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.104999  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.105058  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:17:51.105220  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:17:51.214734  127760 ssh_runner.go:195] Run: systemctl --version
	I1212 23:17:51.221556  127760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:17:51.379699  127760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:17:51.386319  127760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:17:51.386411  127760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:17:51.406594  127760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:17:51.406623  127760 start.go:475] detecting cgroup driver to use...
	I1212 23:17:51.406707  127760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:17:51.421646  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:17:51.439574  127760 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:17:51.439651  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:17:51.456389  127760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:17:51.469380  127760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:17:51.576093  127760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:17:51.711468  127760 docker.go:219] disabling docker service ...
	I1212 23:17:51.711548  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:17:51.726747  127760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:17:51.739661  127760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:17:51.852974  127760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:17:51.973603  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:17:51.986983  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:17:52.004739  127760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:17:52.004809  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.017255  127760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:17:52.017345  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.029275  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.040398  127760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:17:52.051671  127760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
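
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and replace conmon_cgroup with "pod". A standalone sketch that reproduces those edits locally (illustrative only; in the test they are executed through ssh_runner on the guest as root):

// crio_tweaks.go — mirrors the crio.conf edits shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Each entry corresponds to one `sh -c "sudo sed -i ..."` line above.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, // pin pause image
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,            // use cgroupfs driver
		`/conmon_cgroup = .*/d`,                                             // drop old conmon_cgroup
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,                      // re-add it as "pod"
	}
	for _, e := range edits {
		if out, err := exec.Command("sed", "-i", e, conf).CombinedOutput(); err != nil {
			fmt.Printf("sed %q failed: %v (%s)\n", e, err, out)
			return
		}
	}
	fmt.Println("crio.conf updated; restart crio to apply")
}
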
	I1212 23:17:52.062036  127760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:17:52.070879  127760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:17:52.070958  127760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:17:52.087878  127760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:17:52.099487  127760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:17:52.246621  127760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:17:52.445182  127760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:17:52.445259  127760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:17:52.450378  127760 start.go:543] Will wait 60s for crictl version
	I1212 23:17:52.450458  127760 ssh_runner.go:195] Run: which crictl
	I1212 23:17:52.454778  127760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:17:52.497569  127760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:17:52.497679  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.562042  127760 ssh_runner.go:195] Run: crio --version
	I1212 23:17:52.622289  127760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1212 23:17:52.623892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetIP
	I1212 23:17:52.626997  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627438  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:17:52.627474  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:17:52.627731  127760 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 23:17:52.633387  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:52.647682  127760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 23:17:52.647763  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:52.691061  127760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1212 23:17:52.691138  127760 ssh_runner.go:195] Run: which lz4
	I1212 23:17:52.695575  127760 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:17:52.701228  127760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:17:52.701265  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1212 23:17:53.042479  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.042516  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.042532  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.134475  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:17:53.134511  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:17:53.404943  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.413791  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.413829  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:53.904341  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:53.916515  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:17:53.916564  128282 api_server.go:103] status: https://192.168.39.180:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:17:54.404229  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:17:54.414091  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:17:54.428577  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:17:54.428615  128282 api_server.go:131] duration metric: took 5.525437271s to wait for apiserver health ...
	I1212 23:17:54.428628  128282 cni.go:84] Creating CNI manager for ""
	I1212 23:17:54.428638  128282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:54.430838  128282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
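	(The repeated 500s followed by a final 200 above come from minikube's api_server.go health wait. Purely as an illustrative sketch, and not minikube's actual implementation, the following Go probe reproduces the observed behavior of retrying https://<node-ip>:8444/healthz until it answers 200, treating 500 as "not ready yet". The URL is taken from the log; the interval and timeout values are assumptions for the example.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // A 500 response (seen in the log while poststarthooks are still failing)
    // is treated as "not ready yet" and retried.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed certificate here, so this
            // illustrative probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        // Address from the log above; interval/timeout are illustrative.
        if err := waitForHealthz("https://192.168.39.180:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ok")
    }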
	I1212 23:17:50.589742  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:52.593395  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:54.432405  128282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:17:54.450147  128282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:17:54.496866  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:54.519276  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:54.519327  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:17:54.519339  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:17:54.519354  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:17:54.519405  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:17:54.519418  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:17:54.519434  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:17:54.519447  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:54.519484  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:17:54.519498  128282 system_pods.go:74] duration metric: took 22.603103ms to wait for pod list to return data ...
	I1212 23:17:54.519512  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:54.526046  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:54.526083  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:54.526098  128282 node_conditions.go:105] duration metric: took 6.575834ms to run NodePressure ...
	I1212 23:17:54.526127  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:17:54.979886  128282 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991132  128282 kubeadm.go:787] kubelet initialised
	I1212 23:17:54.991169  128282 kubeadm.go:788] duration metric: took 11.248765ms waiting for restarted kubelet to initialise ...
	I1212 23:17:54.991185  128282 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:54.999550  128282 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.008465  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008494  128282 pod_ready.go:81] duration metric: took 8.904589ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.008508  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.008516  128282 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.020120  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020152  128282 pod_ready.go:81] duration metric: took 11.625987ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.020164  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.020191  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.030018  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030056  128282 pod_ready.go:81] duration metric: took 9.856172ms waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.030074  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.030083  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.039957  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.039997  128282 pod_ready.go:81] duration metric: took 9.902972ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.040015  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.040025  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.384922  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384964  128282 pod_ready.go:81] duration metric: took 344.925878ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.384979  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-proxy-wjrjj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.384988  128282 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.791268  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791307  128282 pod_ready.go:81] duration metric: took 406.306307ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:55.791323  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:55.791335  128282 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:56.186386  128282 pod_ready.go:97] node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186484  128282 pod_ready.go:81] duration metric: took 395.136012ms waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:17:56.186514  128282 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-850839" hosting pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:56.186553  128282 pod_ready.go:38] duration metric: took 1.195355612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:56.186577  128282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:17:56.201434  128282 ops.go:34] apiserver oom_adj: -16
	I1212 23:17:56.201462  128282 kubeadm.go:640] restartCluster took 21.273148264s
	I1212 23:17:56.201473  128282 kubeadm.go:406] StartCluster complete in 21.325115034s
	I1212 23:17:56.201496  128282 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.201592  128282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:17:56.204683  128282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:56.205095  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:17:56.205222  128282 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:17:56.205300  128282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205321  128282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205330  128282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850839"
	I1212 23:17:56.205346  128282 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205361  128282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850839"
	W1212 23:17:56.205363  128282 addons.go:240] addon metrics-server should already be in state true
	I1212 23:17:56.205324  128282 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-850839"
	I1212 23:17:56.205445  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205360  128282 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 23:17:56.205501  128282 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:17:56.205595  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.205832  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205855  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205918  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.205939  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.205978  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.206077  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.215695  128282 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850839" context rescaled to 1 replicas
	I1212 23:17:56.215745  128282 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:17:56.219003  128282 out.go:177] * Verifying Kubernetes components...
	I1212 23:17:56.221363  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.223684  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I1212 23:17:56.223901  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I1212 23:17:56.224018  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I1212 23:17:56.224530  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.224610  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225015  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.225250  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.225260  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.225597  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.225990  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.226015  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.226308  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.226318  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.227368  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.227535  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.229799  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.229817  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.230427  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.232575  128282 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-850839"
	W1212 23:17:56.232593  128282 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:17:56.232623  128282 host.go:66] Checking if "default-k8s-diff-port-850839" exists ...
	I1212 23:17:56.233075  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233110  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.233880  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.233930  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.245636  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1212 23:17:56.246119  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.246606  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.246623  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.246950  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.247098  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.248959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.251159  128282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:17:56.249918  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1212 23:17:56.251294  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I1212 23:17:56.252768  128282 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.252783  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:17:56.252798  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.253647  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.253753  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.254219  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254233  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254340  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.254347  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.254690  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254749  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.254959  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.255310  128282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:17:56.255335  128282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:17:56.256017  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256612  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.256639  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.256730  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.257003  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.257189  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.257402  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.258242  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.260097  128282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:17:54.115994  127900 pod_ready.go:102] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:55.606824  127900 pod_ready.go:92] pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.606858  127900 pod_ready.go:81] duration metric: took 34.03725266s waiting for pod "coredns-5644d7b6d9-4698s" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.606872  127900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619163  127900 pod_ready.go:92] pod "etcd-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.619197  127900 pod_ready.go:81] duration metric: took 12.316097ms waiting for pod "etcd-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.619212  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627282  127900 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.627313  127900 pod_ready.go:81] duration metric: took 8.08913ms waiting for pod "kube-apiserver-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.627328  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634928  127900 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.634962  127900 pod_ready.go:81] duration metric: took 7.625067ms waiting for pod "kube-controller-manager-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.634978  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644531  127900 pod_ready.go:92] pod "kube-proxy-b6lz6" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.644558  127900 pod_ready.go:81] duration metric: took 9.571853ms waiting for pod "kube-proxy-b6lz6" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.644572  127900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985318  127900 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace has status "Ready":"True"
	I1212 23:17:55.985350  127900 pod_ready.go:81] duration metric: took 340.769789ms waiting for pod "kube-scheduler-old-k8s-version-549640" in "kube-system" namespace to be "Ready" ...
	I1212 23:17:55.985366  127900 pod_ready.go:38] duration metric: took 34.420989087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:17:55.985382  127900 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:17:55.985443  127900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:17:56.008913  127900 api_server.go:72] duration metric: took 42.305439195s to wait for apiserver process to appear ...
	I1212 23:17:56.009000  127900 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:17:56.009030  127900 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I1212 23:17:56.017005  127900 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I1212 23:17:56.018170  127900 api_server.go:141] control plane version: v1.16.0
	I1212 23:17:56.018198  127900 api_server.go:131] duration metric: took 9.18267ms to wait for apiserver health ...
	I1212 23:17:56.018209  127900 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:17:56.189360  127900 system_pods.go:59] 8 kube-system pods found
	I1212 23:17:56.189394  127900 system_pods.go:61] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.189401  127900 system_pods.go:61] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.189408  127900 system_pods.go:61] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.189415  127900 system_pods.go:61] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.189421  127900 system_pods.go:61] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.189428  127900 system_pods.go:61] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.189437  127900 system_pods.go:61] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.189447  127900 system_pods.go:61] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.189462  127900 system_pods.go:74] duration metric: took 171.24435ms to wait for pod list to return data ...
	I1212 23:17:56.189477  127900 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:17:56.386180  127900 default_sa.go:45] found service account: "default"
	I1212 23:17:56.386211  127900 default_sa.go:55] duration metric: took 196.72345ms for default service account to be created ...
	I1212 23:17:56.386223  127900 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:17:56.591313  127900 system_pods.go:86] 8 kube-system pods found
	I1212 23:17:56.591345  127900 system_pods.go:89] "coredns-5644d7b6d9-4698s" [bf3181b9-bbf8-431d-9b2f-45daee2289f1] Running
	I1212 23:17:56.591354  127900 system_pods.go:89] "etcd-old-k8s-version-549640" [75a26012-dc0d-40f1-8565-9e9c8da837e4] Running
	I1212 23:17:56.591361  127900 system_pods.go:89] "kube-apiserver-old-k8s-version-549640" [17e47a08-37e0-4829-95a5-c371adbf974f] Running
	I1212 23:17:56.591369  127900 system_pods.go:89] "kube-controller-manager-old-k8s-version-549640" [0313d511-851e-4932-9a7c-90d0627e5efc] Running
	I1212 23:17:56.591375  127900 system_pods.go:89] "kube-proxy-b6lz6" [4ec8ee19-e734-4792-82be-3765afc63a12] Running
	I1212 23:17:56.591382  127900 system_pods.go:89] "kube-scheduler-old-k8s-version-549640" [852bea9e-e24c-4d81-abf1-a4e9629d0654] Running
	I1212 23:17:56.591393  127900 system_pods.go:89] "metrics-server-74d5856cc6-hsjtz" [0cb2ae7e-8232-4802-8552-0088be4ae16b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:17:56.591401  127900 system_pods.go:89] "storage-provisioner" [a525a632-2304-4070-83a1-0d4a0a995d2d] Running
	I1212 23:17:56.591414  127900 system_pods.go:126] duration metric: took 205.183283ms to wait for k8s-apps to be running ...
	I1212 23:17:56.591429  127900 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:17:56.591482  127900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:17:56.611938  127900 system_svc.go:56] duration metric: took 20.493956ms WaitForService to wait for kubelet.
	I1212 23:17:56.611982  127900 kubeadm.go:581] duration metric: took 42.908516938s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:17:56.612014  127900 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:17:56.785799  127900 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:17:56.785841  127900 node_conditions.go:123] node cpu capacity is 2
	I1212 23:17:56.785856  127900 node_conditions.go:105] duration metric: took 173.834506ms to run NodePressure ...
	I1212 23:17:56.785874  127900 start.go:228] waiting for startup goroutines ...
	I1212 23:17:56.785883  127900 start.go:233] waiting for cluster config update ...
	I1212 23:17:56.785898  127900 start.go:242] writing updated cluster config ...
	I1212 23:17:56.786402  127900 ssh_runner.go:195] Run: rm -f paused
	I1212 23:17:56.860461  127900 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1212 23:17:56.862646  127900 out.go:177] 
	W1212 23:17:56.864213  127900 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1212 23:17:56.865656  127900 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1212 23:17:56.867482  127900 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-549640" cluster and "default" namespace by default
	I1212 23:17:54.719978  127760 crio.go:444] Took 2.024442 seconds to copy over tarball
	I1212 23:17:54.720063  127760 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:17:56.261553  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:17:56.261577  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:17:56.261599  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.269093  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269478  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.269501  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.269778  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.269969  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.270192  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.270348  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.273173  128282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I1212 23:17:56.273551  128282 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:17:56.274146  128282 main.go:141] libmachine: Using API Version  1
	I1212 23:17:56.274170  128282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:17:56.274479  128282 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:17:56.274657  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetState
	I1212 23:17:56.276224  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .DriverName
	I1212 23:17:56.276536  128282 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.276553  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:17:56.276572  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHHostname
	I1212 23:17:56.279571  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.279991  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:81:5e", ip: ""} in network mk-default-k8s-diff-port-850839: {Iface:virbr1 ExpiryTime:2023-12-13 00:17:15 +0000 UTC Type:0 Mac:52:54:00:6d:81:5e Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:default-k8s-diff-port-850839 Clientid:01:52:54:00:6d:81:5e}
	I1212 23:17:56.280030  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | domain default-k8s-diff-port-850839 has defined IP address 192.168.39.180 and MAC address 52:54:00:6d:81:5e in network mk-default-k8s-diff-port-850839
	I1212 23:17:56.280183  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHPort
	I1212 23:17:56.280395  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHKeyPath
	I1212 23:17:56.280562  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .GetSSHUsername
	I1212 23:17:56.280708  128282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/default-k8s-diff-port-850839/id_rsa Username:docker}
	I1212 23:17:56.399444  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:17:56.447026  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:17:56.447058  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:17:56.453920  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:17:56.474280  128282 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 23:17:56.474316  128282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:17:56.509564  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:17:56.509598  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:17:56.575180  128282 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:56.575217  128282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:17:56.641478  128282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:17:58.298873  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.89938362s)
	I1212 23:17:58.298942  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.298948  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.844991558s)
	I1212 23:17:58.298957  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.298986  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299063  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299326  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299356  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299367  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299387  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299439  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299448  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299463  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299479  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.299489  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.299673  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299690  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.299850  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.299879  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.299899  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.308876  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.308905  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.309195  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.309232  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.309241  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.418788  128282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.777244462s)
	I1212 23:17:58.418849  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.418866  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.419251  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.419285  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.419297  128282 main.go:141] libmachine: Making call to close driver server
	I1212 23:17:58.419308  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) Calling .Close
	I1212 23:17:58.420803  128282 main.go:141] libmachine: (default-k8s-diff-port-850839) DBG | Closing plugin on server side
	I1212 23:17:58.420837  128282 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:17:58.420857  128282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:17:58.420876  128282 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-850839"
	I1212 23:17:58.591048  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:17:58.635345  128282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:17:54.595102  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:57.089235  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:17:58.815643  128282 addons.go:502] enable addons completed in 2.610454188s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:17:58.247448  127760 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.527350021s)
	I1212 23:17:58.247482  127760 crio.go:451] Took 3.527472 seconds to extract the tarball
	I1212 23:17:58.247500  127760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:17:58.292239  127760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:17:58.347669  127760 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:17:58.347700  127760 cache_images.go:84] Images are preloaded, skipping loading
	I1212 23:17:58.347774  127760 ssh_runner.go:195] Run: crio config
	I1212 23:17:58.410577  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:17:58.410604  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:17:58.410627  127760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 23:17:58.410658  127760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-809120 NodeName:embed-certs-809120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:17:58.410874  127760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-809120"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
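	(The nodefs.available / nodefs.inodesFree / imagefs.available values above are rendered as "0%!"(MISSING) because a literal "0%" was passed through a printf-style formatter; the eviction settings the generated config presumably intends are:)

    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"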
	I1212 23:17:58.410973  127760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-809120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:17:58.411040  127760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 23:17:58.422571  127760 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:17:58.422655  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:17:58.432833  127760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1212 23:17:58.449996  127760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 23:17:58.468807  127760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1212 23:17:58.487568  127760 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I1212 23:17:58.492547  127760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:17:58.505497  127760 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120 for IP: 192.168.50.221
	I1212 23:17:58.505548  127760 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:17:58.505759  127760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:17:58.505820  127760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:17:58.505891  127760 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/client.key
	I1212 23:17:58.585996  127760 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key.edab0817
	I1212 23:17:58.586114  127760 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key
	I1212 23:17:58.586288  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:17:58.586319  127760 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:17:58.586330  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:17:58.586356  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:17:58.586381  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:17:58.586418  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:17:58.586483  127760 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:17:58.587254  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:17:58.615215  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:17:58.644237  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:17:58.670345  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/embed-certs-809120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 23:17:58.694986  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:17:58.719944  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:17:58.744701  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:17:58.768614  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:17:58.792922  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:17:58.815723  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:17:58.840192  127760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:17:58.864277  127760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:17:58.883069  127760 ssh_runner.go:195] Run: openssl version
	I1212 23:17:58.889642  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:17:58.901893  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906910  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.906964  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:17:58.912769  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:17:58.924171  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:17:58.937368  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942604  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.942681  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:17:58.948759  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:17:58.959757  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:17:58.971091  127760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976035  127760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.976105  127760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:17:58.982246  127760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 23:17:58.994786  127760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:17:58.999625  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 23:17:59.006233  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 23:17:59.012668  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 23:17:59.018959  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 23:17:59.025039  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 23:17:59.031628  127760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 23:17:59.037633  127760 kubeadm.go:404] StartCluster: {Name:embed-certs-809120 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-809120 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:17:59.037779  127760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:17:59.037837  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:17:59.078977  127760 cri.go:89] found id: ""
	I1212 23:17:59.079065  127760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:17:59.090869  127760 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 23:17:59.090893  127760 kubeadm.go:636] restartCluster start
	I1212 23:17:59.090957  127760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 23:17:59.101950  127760 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.103088  127760 kubeconfig.go:92] found "embed-certs-809120" server: "https://192.168.50.221:8443"
	I1212 23:17:59.105562  127760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 23:17:59.115942  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.116006  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.128428  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.128452  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.128508  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.141075  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.641778  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:17:59.641854  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:17:59.654519  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.142171  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.142275  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.157160  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:00.641601  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:00.641719  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:00.654666  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.141184  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.141289  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.154899  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:01.641381  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:01.641501  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:01.654663  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.141186  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.141311  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.154140  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:02.642051  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:02.642157  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:02.655013  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:17:59.586733  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.588383  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:03.588956  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:01.092631  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:03.591508  128282 node_ready.go:58] node "default-k8s-diff-port-850839" has status "Ready":"False"
	I1212 23:18:04.090728  128282 node_ready.go:49] node "default-k8s-diff-port-850839" has status "Ready":"True"
	I1212 23:18:04.090757  128282 node_ready.go:38] duration metric: took 7.616412902s waiting for node "default-k8s-diff-port-850839" to be "Ready" ...
	I1212 23:18:04.090775  128282 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:04.099347  128282 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107155  128282 pod_ready.go:92] pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.107180  128282 pod_ready.go:81] duration metric: took 7.807715ms waiting for pod "coredns-5dd5756b68-nrpzf" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.107192  128282 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113524  128282 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:04.113547  128282 pod_ready.go:81] duration metric: took 6.348789ms waiting for pod "etcd-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:04.113557  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:03.141560  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.141654  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.156399  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:03.642066  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:03.642159  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:03.657347  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.141755  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.141837  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.158471  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:04.641645  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:04.641754  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:04.655061  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.141603  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.141699  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.154832  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.641246  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:05.641321  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:05.658753  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.141224  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.141299  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.156055  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:06.641506  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:06.641593  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:06.654083  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.141490  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.141570  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.154699  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:07.641257  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:07.641336  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:07.653935  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:05.590423  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.088212  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:06.134727  128282 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:07.136828  128282 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.136854  128282 pod_ready.go:81] duration metric: took 3.023290043s waiting for pod "kube-apiserver-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.136866  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151525  128282 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.151554  128282 pod_ready.go:81] duration metric: took 14.680003ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.151570  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293823  128282 pod_ready.go:92] pod "kube-proxy-wjrjj" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.293853  128282 pod_ready.go:81] duration metric: took 142.276185ms waiting for pod "kube-proxy-wjrjj" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.293864  128282 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690262  128282 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:07.690291  128282 pod_ready.go:81] duration metric: took 396.420266ms waiting for pod "kube-scheduler-default-k8s-diff-port-850839" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:07.690311  128282 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:10.001790  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:08.141984  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.142065  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.154365  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:08.641957  127760 api_server.go:166] Checking apiserver status ...
	I1212 23:18:08.642070  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 23:18:08.654449  127760 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 23:18:09.117052  127760 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1212 23:18:09.117093  127760 kubeadm.go:1135] stopping kube-system containers ...
	I1212 23:18:09.117131  127760 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 23:18:09.117195  127760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:18:09.165861  127760 cri.go:89] found id: ""
	I1212 23:18:09.165944  127760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 23:18:09.183729  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:18:09.194407  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:18:09.194487  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204575  127760 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 23:18:09.204609  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:09.333758  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.380332  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04653446s)
	I1212 23:18:10.380364  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.603185  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.692919  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:10.776099  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:18:10.776189  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.795881  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.310083  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:11.809948  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.309977  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:12.810420  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:10.089789  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.589345  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:12.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:14.002715  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:13.310509  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:18:13.336361  127760 api_server.go:72] duration metric: took 2.560264825s to wait for apiserver process to appear ...
	I1212 23:18:13.336391  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:18:13.336411  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.319120  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.319159  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.319177  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.400337  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 23:18:17.400373  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 23:18:17.900625  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:17.906178  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:17.906233  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.401353  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.407217  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 23:18:18.407262  127760 api_server.go:103] status: https://192.168.50.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 23:18:18.901435  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:18:18.913756  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:18:18.922517  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:18:18.922545  127760 api_server.go:131] duration metric: took 5.586147801s to wait for apiserver health ...
	I1212 23:18:18.922556  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:18:18.922563  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:18:18.924845  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:18:15.088538  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:17.587744  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:16.503957  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.002214  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:18.926570  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:18:18.976384  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:18:19.009915  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:18:19.035935  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:18:19.035986  127760 system_pods.go:61] "coredns-5dd5756b68-bz6cz" [4f53d6a6-c877-4f76-8aca-06ee891d9652] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 23:18:19.035996  127760 system_pods.go:61] "etcd-embed-certs-809120" [260387de-7507-4962-b2fd-90cd6b39cae8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 23:18:19.036005  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [94ded414-9813-4d0e-8de4-8ad5f6c16a33] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 23:18:19.036017  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [c6574dde-8281-4dd2-bacd-c0412f1f592c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 23:18:19.036028  127760 system_pods.go:61] "kube-proxy-h7zgl" [87ca2a99-1da7-4a50-b4c7-f160cddf9ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 23:18:19.036042  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [fc6d3a5c-4056-47f8-9156-f5d370ba1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 23:18:19.036053  127760 system_pods.go:61] "metrics-server-57f55c9bc5-mxsd2" [d519663c-7921-4fc9-8d0f-ecf6d3cdbd02] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:18:19.036071  127760 system_pods.go:61] "storage-provisioner" [900e5cb9-7d27-4446-b15d-21f67fa3b629] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 23:18:19.036081  127760 system_pods.go:74] duration metric: took 26.13268ms to wait for pod list to return data ...
	I1212 23:18:19.036093  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:18:19.045885  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:18:19.045930  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:18:19.045945  127760 node_conditions.go:105] duration metric: took 9.842707ms to run NodePressure ...
	I1212 23:18:19.045969  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 23:18:19.587096  127760 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593698  127760 kubeadm.go:787] kubelet initialised
	I1212 23:18:19.593722  127760 kubeadm.go:788] duration metric: took 6.595854ms waiting for restarted kubelet to initialise ...
	I1212 23:18:19.593730  127760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:18:19.602567  127760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:21.623798  127760 pod_ready.go:102] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:19.590788  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:22.089448  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:24.090497  128156 pod_ready.go:102] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:21.501964  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.502814  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:26.000629  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:23.124864  127760 pod_ready.go:92] pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:23.124888  127760 pod_ready.go:81] duration metric: took 3.52228673s waiting for pod "coredns-5dd5756b68-bz6cz" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:23.124898  127760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:25.143967  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.146069  127760 pod_ready.go:102] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:27.645645  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.645671  127760 pod_ready.go:81] duration metric: took 4.520766787s waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.645686  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652369  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:27.652392  127760 pod_ready.go:81] duration metric: took 6.700076ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.652402  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587478  128156 pod_ready.go:92] pod "coredns-76f75df574-2rdhr" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.587505  128156 pod_ready.go:81] duration metric: took 40.035726456s waiting for pod "coredns-76f75df574-2rdhr" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.587518  128156 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.596994  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.597015  128156 pod_ready.go:81] duration metric: took 9.490538ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.597027  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601904  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.601930  128156 pod_ready.go:81] duration metric: took 4.894855ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.601942  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608643  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.608662  128156 pod_ready.go:81] duration metric: took 6.712079ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.608673  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614595  128156 pod_ready.go:92] pod "kube-proxy-rqhmc" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.614624  128156 pod_ready.go:81] duration metric: took 5.945157ms waiting for pod "kube-proxy-rqhmc" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.614632  128156 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985244  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:24.985272  128156 pod_ready.go:81] duration metric: took 370.631498ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:24.985282  128156 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:27.293707  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.293859  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:28.500792  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:31.002513  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:29.676207  127760 pod_ready.go:102] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:32.172306  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.172339  127760 pod_ready.go:81] duration metric: took 4.519929269s waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.172355  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178133  127760 pod_ready.go:92] pod "kube-proxy-h7zgl" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.178154  127760 pod_ready.go:81] duration metric: took 5.793304ms waiting for pod "kube-proxy-h7zgl" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.178163  127760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184283  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:18:32.184305  127760 pod_ready.go:81] duration metric: took 6.134863ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:32.184319  127760 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	I1212 23:18:31.792415  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.793837  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:33.499687  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:35.500853  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:34.448290  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.948646  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:36.296844  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.793406  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.001930  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:40.501951  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:38.949791  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.448832  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:41.294594  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.295134  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.000673  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.000747  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:43.452098  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.947475  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:45.793152  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.793282  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.003229  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.499682  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:47.949034  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:50.449118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.455176  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:49.793896  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:52.293413  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.293611  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:51.502870  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.000866  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.002047  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:54.948058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.950946  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:56.791908  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.792808  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:58.500328  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.000549  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:18:59.449089  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:01.948622  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:00.793090  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.294337  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.002131  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.500315  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:03.948920  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.949566  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:05.792376  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.793999  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:08.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.500002  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:07.950271  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.450074  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:10.292457  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.294375  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.503977  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:15.000631  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:12.948486  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.951220  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.448916  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:14.792888  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:16.793429  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.293010  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:17.000916  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.499770  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:19.449088  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.949856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.293433  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.792996  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:21.506787  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.507411  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:26.001279  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:23.950269  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.952818  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:25.793527  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.294892  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.499823  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.500142  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:28.448303  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.449512  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:30.793364  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.293202  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:33.001883  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.500561  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:32.948419  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:34.948716  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:36.949202  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:35.293744  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:37.294070  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:38.001116  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:40.001502  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.449215  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:41.948577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:39.793176  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.292783  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:42.501401  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:45.003364  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:43.950039  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.449043  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:44.792361  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:46.793184  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.294980  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:47.500147  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:49.501096  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:48.449912  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:50.950549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:51.794547  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.298465  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.000382  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:54.005736  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:52.950635  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:55.449330  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:57.449700  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.792615  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:56.499865  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:58.499980  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:00.500389  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:19:59.950151  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:02.447970  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:01.793306  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.793698  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:03.001300  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.499370  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:04.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:06.450549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:05.793804  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.793899  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:07.500520  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.000481  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:08.950058  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:11.449345  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:10.293157  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.293642  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:12.500064  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.500937  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:13.949163  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:16.448489  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:14.793066  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.293467  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.293785  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:17.003921  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:19.501044  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:18.953218  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.449082  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.792447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.794479  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:21.999979  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:24.001269  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.001308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:23.948517  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:25.949879  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:26.292488  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.293405  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.499717  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.500472  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:28.448633  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.455346  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:30.293436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.296063  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:33.004484  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:35.500190  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:32.949307  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.949549  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.447994  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:34.792727  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.292297  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.293185  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:37.501094  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:40.000124  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:39.448914  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.449574  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:41.296498  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.794079  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:42.000667  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:44.500084  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:43.949370  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.448365  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.293571  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.795374  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:46.501287  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:49.000247  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.002102  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:48.449326  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:50.950049  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:51.295712  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.796436  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:53.500278  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.500483  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:52.950509  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:55.448194  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:57.448444  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:56.293432  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.791909  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:58.000148  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.000718  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:20:59.448627  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:01.449178  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:00.793652  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.798916  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:02.501103  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:04.504053  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:03.948376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.949118  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:05.293868  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.796468  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.000140  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:09.500040  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:07.949917  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.449692  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:10.296954  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.793159  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:11.500724  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:13.501811  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:16.000506  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:12.948932  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:14.951174  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.448985  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:15.294394  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:17.792822  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:18.501242  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.000679  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:19.449857  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:21.949137  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:20.293991  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:22.793476  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.501237  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.001069  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:23.950208  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:26.449036  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:25.294562  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:27.792099  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.500763  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.000635  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:28.947918  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:30.949180  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:29.793559  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:31.793709  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:34.292407  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:33.001948  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.002761  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:32.949352  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:35.448233  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.449470  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:36.292723  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:38.792944  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:37.501308  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.001944  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:39.948613  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:41.953252  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:40.793938  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.796054  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:42.499956  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.504598  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:44.453963  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.952856  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:45.292988  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:47.792829  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:46.999714  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.000749  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.000798  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.448592  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:51.461405  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:49.793084  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:52.293550  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.001475  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:55.499894  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:53.952376  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.451000  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:54.793373  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:56.796557  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:59.293830  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:57.501136  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.000501  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:21:58.949246  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:00.949331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:01.792604  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.793283  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:02.501611  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.001210  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:03.449006  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:05.449356  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:06.291970  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:08.293443  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.502381  128282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:07.690392  128282 pod_ready.go:81] duration metric: took 4m0.000056495s waiting for pod "metrics-server-57f55c9bc5-zwzrg" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:07.690437  128282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:07.690447  128282 pod_ready.go:38] duration metric: took 4m3.599656754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:07.690468  128282 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:22:07.690503  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:07.690560  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:07.752216  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:07.752249  128282 cri.go:89] found id: ""
	I1212 23:22:07.752258  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:07.752309  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.757000  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:07.757068  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:07.801367  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:07.801398  128282 cri.go:89] found id: ""
	I1212 23:22:07.801409  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:07.801470  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.806744  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:07.806804  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:07.850495  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:07.850530  128282 cri.go:89] found id: ""
	I1212 23:22:07.850538  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:07.850588  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.855144  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:07.855226  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:07.900092  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:07.900121  128282 cri.go:89] found id: ""
	I1212 23:22:07.900131  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:07.900199  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.904280  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:07.904357  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:07.945991  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:07.946019  128282 cri.go:89] found id: ""
	I1212 23:22:07.946034  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:07.946101  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.951095  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:07.951168  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:07.992586  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:07.992611  128282 cri.go:89] found id: ""
	I1212 23:22:07.992619  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:07.992667  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:07.996887  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:07.996945  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:08.038769  128282 cri.go:89] found id: ""
	I1212 23:22:08.038810  128282 logs.go:284] 0 containers: []
	W1212 23:22:08.038820  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:08.038829  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:08.038892  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:08.081167  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.081202  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.081209  128282 cri.go:89] found id: ""
	I1212 23:22:08.081225  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:08.081282  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.085740  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:08.089816  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:08.089836  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:08.137243  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:08.137274  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:08.180654  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:08.180686  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:08.240646  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:08.240684  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:08.289713  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:08.289753  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:08.440863  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:08.440902  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:08.505477  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:08.505516  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:08.561373  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:08.561411  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:08.626446  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:08.626482  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:08.681726  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:08.681769  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:08.703440  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:08.703468  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:08.739960  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:08.739998  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:09.213821  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:09.213867  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:07.949577  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:09.950086  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.449579  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:10.793412  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:12.794447  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:11.771447  128282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:22:11.787326  128282 api_server.go:72] duration metric: took 4m15.571529815s to wait for apiserver process to appear ...
	I1212 23:22:11.787355  128282 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:22:11.787395  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:11.787459  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:11.841146  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:11.841178  128282 cri.go:89] found id: ""
	I1212 23:22:11.841199  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:11.841263  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.845844  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:11.845917  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:11.895757  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:11.895780  128282 cri.go:89] found id: ""
	I1212 23:22:11.895789  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:11.895846  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.900575  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:11.900641  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:11.941848  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:11.941872  128282 cri.go:89] found id: ""
	I1212 23:22:11.941882  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:11.941962  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:11.948119  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:11.948192  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:11.997102  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:11.997126  128282 cri.go:89] found id: ""
	I1212 23:22:11.997135  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:11.997189  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.002683  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:12.002750  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:12.042120  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:12.042144  128282 cri.go:89] found id: ""
	I1212 23:22:12.042159  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:12.042225  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.047068  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:12.047144  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:12.092055  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:12.092078  128282 cri.go:89] found id: ""
	I1212 23:22:12.092087  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:12.092137  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.097642  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:12.097713  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:12.137481  128282 cri.go:89] found id: ""
	I1212 23:22:12.137521  128282 logs.go:284] 0 containers: []
	W1212 23:22:12.137532  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:12.137542  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:12.137607  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:12.183712  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:12.183735  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.183740  128282 cri.go:89] found id: ""
	I1212 23:22:12.183747  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:12.183813  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.188656  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:12.193613  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:12.193639  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:12.206911  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:12.206941  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:12.258294  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:12.258335  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:12.300901  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:12.300934  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:12.765702  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:12.765746  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:12.909101  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:12.909138  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:12.967049  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:12.967083  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:13.010895  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:13.010930  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:13.062291  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:13.062324  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:13.107276  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:13.107320  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:13.166395  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:13.166448  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:13.212812  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:13.212853  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:13.260977  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:13.261022  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:15.816287  128282 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8444/healthz ...
	I1212 23:22:15.821554  128282 api_server.go:279] https://192.168.39.180:8444/healthz returned 200:
	ok
	I1212 23:22:15.822925  128282 api_server.go:141] control plane version: v1.28.4
	I1212 23:22:15.822945  128282 api_server.go:131] duration metric: took 4.035583432s to wait for apiserver health ...
	I1212 23:22:15.822954  128282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:22:15.822976  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 23:22:15.823024  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 23:22:15.870940  128282 cri.go:89] found id: "71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:15.870981  128282 cri.go:89] found id: ""
	I1212 23:22:15.870993  128282 logs.go:284] 1 containers: [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b]
	I1212 23:22:15.871062  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.876167  128282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 23:22:15.876244  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 23:22:15.916642  128282 cri.go:89] found id: "57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:15.916671  128282 cri.go:89] found id: ""
	I1212 23:22:15.916682  128282 logs.go:284] 1 containers: [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9]
	I1212 23:22:15.916747  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.921173  128282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 23:22:15.921238  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 23:22:15.963421  128282 cri.go:89] found id: "79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:15.963449  128282 cri.go:89] found id: ""
	I1212 23:22:15.963461  128282 logs.go:284] 1 containers: [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954]
	I1212 23:22:15.963521  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:15.967747  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 23:22:15.967821  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 23:22:14.949925  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.949999  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:15.294181  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:17.793324  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:16.011046  128282 cri.go:89] found id: "d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.011071  128282 cri.go:89] found id: ""
	I1212 23:22:16.011079  128282 logs.go:284] 1 containers: [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9]
	I1212 23:22:16.011128  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.015592  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 23:22:16.015659  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 23:22:16.058065  128282 cri.go:89] found id: "fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:16.058092  128282 cri.go:89] found id: ""
	I1212 23:22:16.058103  128282 logs.go:284] 1 containers: [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088]
	I1212 23:22:16.058157  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.062334  128282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 23:22:16.062398  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 23:22:16.105032  128282 cri.go:89] found id: "901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:16.105062  128282 cri.go:89] found id: ""
	I1212 23:22:16.105074  128282 logs.go:284] 1 containers: [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee]
	I1212 23:22:16.105140  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.109674  128282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 23:22:16.109728  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 23:22:16.151188  128282 cri.go:89] found id: ""
	I1212 23:22:16.151221  128282 logs.go:284] 0 containers: []
	W1212 23:22:16.151230  128282 logs.go:286] No container was found matching "kindnet"
	I1212 23:22:16.151246  128282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 23:22:16.151314  128282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 23:22:16.196149  128282 cri.go:89] found id: "61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:16.196191  128282 cri.go:89] found id: "8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.196199  128282 cri.go:89] found id: ""
	I1212 23:22:16.196209  128282 logs.go:284] 2 containers: [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988]
	I1212 23:22:16.196272  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.201690  128282 ssh_runner.go:195] Run: which crictl
	I1212 23:22:16.205939  128282 logs.go:123] Gathering logs for describe nodes ...
	I1212 23:22:16.205970  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 23:22:16.358186  128282 logs.go:123] Gathering logs for etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] ...
	I1212 23:22:16.358236  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9"
	I1212 23:22:16.404737  128282 logs.go:123] Gathering logs for kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] ...
	I1212 23:22:16.404780  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9"
	I1212 23:22:16.449040  128282 logs.go:123] Gathering logs for storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] ...
	I1212 23:22:16.449069  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988"
	I1212 23:22:16.491141  128282 logs.go:123] Gathering logs for CRI-O ...
	I1212 23:22:16.491173  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 23:22:16.860522  128282 logs.go:123] Gathering logs for dmesg ...
	I1212 23:22:16.860578  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 23:22:16.877982  128282 logs.go:123] Gathering logs for kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] ...
	I1212 23:22:16.878030  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b"
	I1212 23:22:16.923301  128282 logs.go:123] Gathering logs for coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] ...
	I1212 23:22:16.923338  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954"
	I1212 23:22:16.965351  128282 logs.go:123] Gathering logs for kubelet ...
	I1212 23:22:16.965382  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 23:22:17.024559  128282 logs.go:123] Gathering logs for container status ...
	I1212 23:22:17.024603  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 23:22:17.079193  128282 logs.go:123] Gathering logs for kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] ...
	I1212 23:22:17.079229  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088"
	I1212 23:22:17.123956  128282 logs.go:123] Gathering logs for kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] ...
	I1212 23:22:17.124003  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee"
	I1212 23:22:17.202000  128282 logs.go:123] Gathering logs for storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] ...
	I1212 23:22:17.202043  128282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c"
	I1212 23:22:19.755866  128282 system_pods.go:59] 8 kube-system pods found
	I1212 23:22:19.755901  128282 system_pods.go:61] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.755907  128282 system_pods.go:61] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.755914  128282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.755922  128282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.755929  128282 system_pods.go:61] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.755936  128282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.755946  128282 system_pods.go:61] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.755954  128282 system_pods.go:61] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.755963  128282 system_pods.go:74] duration metric: took 3.933003633s to wait for pod list to return data ...
	I1212 23:22:19.755977  128282 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:22:19.758618  128282 default_sa.go:45] found service account: "default"
	I1212 23:22:19.758639  128282 default_sa.go:55] duration metric: took 2.655294ms for default service account to be created ...
	I1212 23:22:19.758647  128282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:22:19.764376  128282 system_pods.go:86] 8 kube-system pods found
	I1212 23:22:19.764398  128282 system_pods.go:89] "coredns-5dd5756b68-nrpzf" [bfe81238-05e0-4f68-8a23-d212eb2a24f2] Running
	I1212 23:22:19.764404  128282 system_pods.go:89] "etcd-default-k8s-diff-port-850839" [ff9bc7f8-7c4b-4cf4-9710-581a2313be6b] Running
	I1212 23:22:19.764409  128282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850839" [f9fc74e6-f9fe-46f4-8c52-e335768ffe62] Running
	I1212 23:22:19.764414  128282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850839" [caecc6dd-ff97-4a63-ba3e-8013350590ea] Running
	I1212 23:22:19.764418  128282 system_pods.go:89] "kube-proxy-wjrjj" [fa659f1c-88de-406d-8183-bcac6f529efc] Running
	I1212 23:22:19.764432  128282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850839" [a080c517-c170-4867-81c0-675335aa9c02] Running
	I1212 23:22:19.764444  128282 system_pods.go:89] "metrics-server-57f55c9bc5-zwzrg" [8b0d823e-df34-45eb-807c-17d8a9178bb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:22:19.764454  128282 system_pods.go:89] "storage-provisioner" [0570ec42-4a53-4688-ac93-ee10fc58313d] Running
	I1212 23:22:19.764464  128282 system_pods.go:126] duration metric: took 5.811076ms to wait for k8s-apps to be running ...
	I1212 23:22:19.764475  128282 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:22:19.764531  128282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:19.781048  128282 system_svc.go:56] duration metric: took 16.561836ms WaitForService to wait for kubelet.
	I1212 23:22:19.781100  128282 kubeadm.go:581] duration metric: took 4m23.565309829s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:22:19.781129  128282 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:22:19.784205  128282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:22:19.784229  128282 node_conditions.go:123] node cpu capacity is 2
	I1212 23:22:19.784240  128282 node_conditions.go:105] duration metric: took 3.105926ms to run NodePressure ...
	I1212 23:22:19.784253  128282 start.go:228] waiting for startup goroutines ...
	I1212 23:22:19.784259  128282 start.go:233] waiting for cluster config update ...
	I1212 23:22:19.784269  128282 start.go:242] writing updated cluster config ...
	I1212 23:22:19.784545  128282 ssh_runner.go:195] Run: rm -f paused
	I1212 23:22:19.840938  128282 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:22:19.842885  128282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850839" cluster and "default" namespace by default
	I1212 23:22:19.449331  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:21.449778  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:20.294156  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:22.792746  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:23.949834  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:26.452555  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.793601  128156 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:24.985518  128156 pod_ready.go:81] duration metric: took 4m0.000203674s waiting for pod "metrics-server-57f55c9bc5-b42rv" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:24.985551  128156 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:24.985571  128156 pod_ready.go:38] duration metric: took 4m40.456239368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:24.985600  128156 kubeadm.go:640] restartCluster took 5m2.616770336s
	W1212 23:22:24.985660  128156 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:24.985690  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:28.949293  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:31.449689  127760 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace has status "Ready":"False"
	I1212 23:22:32.184476  127760 pod_ready.go:81] duration metric: took 4m0.000136331s waiting for pod "metrics-server-57f55c9bc5-mxsd2" in "kube-system" namespace to be "Ready" ...
	E1212 23:22:32.184516  127760 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1212 23:22:32.184559  127760 pod_ready.go:38] duration metric: took 4m12.59080567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:22:32.184598  127760 kubeadm.go:640] restartCluster took 4m33.093698567s
	W1212 23:22:32.184674  127760 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1212 23:22:32.184715  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 23:22:39.117782  128156 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.132057077s)
	I1212 23:22:39.117868  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:39.132912  128156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:39.143453  128156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:39.153628  128156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:39.153684  128156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:39.374201  128156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:46.310264  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.12551082s)
	I1212 23:22:46.310350  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:22:46.327577  127760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:22:46.339177  127760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:22:46.350355  127760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:22:46.350407  127760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:22:46.414859  127760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 23:22:46.414971  127760 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:46.599881  127760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:46.600039  127760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:46.600208  127760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:46.867542  127760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:46.869398  127760 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:46.869528  127760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:46.869659  127760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:46.869770  127760 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:46.869933  127760 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:46.870496  127760 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:46.871021  127760 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:46.871802  127760 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:46.873187  127760 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:46.874737  127760 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:46.876316  127760 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:46.877713  127760 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:46.877769  127760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:47.211156  127760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:47.370652  127760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:47.491927  127760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:47.746007  127760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:47.746996  127760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:47.749868  127760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:47.751553  127760 out.go:204]   - Booting up control plane ...
	I1212 23:22:47.751724  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:47.751814  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:47.752662  127760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:47.770296  127760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:47.770438  127760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:47.770546  127760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.362262  128156 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:22:51.362341  128156 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:22:51.362461  128156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:22:51.362593  128156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:22:51.362706  128156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:22:51.362781  128156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:22:51.364439  128156 out.go:204]   - Generating certificates and keys ...
	I1212 23:22:51.364561  128156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:22:51.364660  128156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:22:51.364758  128156 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 23:22:51.364840  128156 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1212 23:22:51.364971  128156 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 23:22:51.365060  128156 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1212 23:22:51.365137  128156 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1212 23:22:51.365215  128156 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1212 23:22:51.365320  128156 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 23:22:51.365425  128156 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 23:22:51.365479  128156 kubeadm.go:322] [certs] Using the existing "sa" key
	I1212 23:22:51.365553  128156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:22:51.365626  128156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:22:51.365706  128156 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:22:51.365778  128156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:22:51.365859  128156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:22:51.365936  128156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:22:51.366046  128156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:22:51.366131  128156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:22:51.368190  128156 out.go:204]   - Booting up control plane ...
	I1212 23:22:51.368316  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:22:51.368421  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:22:51.368517  128156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:22:51.368649  128156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:22:51.368763  128156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:22:51.368813  128156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:22:51.369013  128156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.369107  128156 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503652 seconds
	I1212 23:22:51.369231  128156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:51.369390  128156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:51.369465  128156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:51.369709  128156 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-115023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:51.369780  128156 kubeadm.go:322] [bootstrap-token] Using token: agyzoj.wkr94b17dt19k7yx
	I1212 23:22:51.371110  128156 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:51.371306  128156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:51.371421  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:51.371643  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:51.371825  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:51.371975  128156 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:51.372085  128156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:51.372226  128156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:51.372285  128156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:51.372344  128156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:51.372353  128156 kubeadm.go:322] 
	I1212 23:22:51.372425  128156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:51.372437  128156 kubeadm.go:322] 
	I1212 23:22:51.372529  128156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:51.372540  128156 kubeadm.go:322] 
	I1212 23:22:51.372571  128156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:51.372645  128156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:51.372711  128156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:51.372720  128156 kubeadm.go:322] 
	I1212 23:22:51.372793  128156 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:51.372804  128156 kubeadm.go:322] 
	I1212 23:22:51.372861  128156 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:51.372871  128156 kubeadm.go:322] 
	I1212 23:22:51.372933  128156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:51.373050  128156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:51.373137  128156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:51.373149  128156 kubeadm.go:322] 
	I1212 23:22:51.373248  128156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:51.373345  128156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:51.373356  128156 kubeadm.go:322] 
	I1212 23:22:51.373456  128156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373583  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:51.373613  128156 kubeadm.go:322] 	--control-plane 
	I1212 23:22:51.373623  128156 kubeadm.go:322] 
	I1212 23:22:51.373724  128156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:51.373739  128156 kubeadm.go:322] 
	I1212 23:22:51.373842  128156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token agyzoj.wkr94b17dt19k7yx \
	I1212 23:22:51.373985  128156 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:51.374006  128156 cni.go:84] Creating CNI manager for ""
	I1212 23:22:51.374015  128156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:51.375563  128156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:47.945457  127760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:22:51.376861  128156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:51.414215  128156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:51.484549  128156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:51.484635  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.484696  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=no-preload-115023 minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:51.564599  128156 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:51.924093  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.026923  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:52.628483  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.128275  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:53.628006  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:54.127897  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.450625  127760 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504757 seconds
	I1212 23:22:56.450779  127760 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:22:56.468441  127760 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:22:57.003074  127760 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:22:57.003292  127760 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-809120 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:22:57.518097  127760 kubeadm.go:322] [bootstrap-token] Using token: ichlu8.wzw1wbhrbc06xbtw
	I1212 23:22:57.519536  127760 out.go:204]   - Configuring RBAC rules ...
	I1212 23:22:57.519639  127760 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:22:57.528652  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:22:57.538325  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:22:57.542226  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:22:57.551395  127760 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:22:57.556988  127760 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:22:57.573462  127760 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:22:57.833933  127760 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:22:57.949764  127760 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:22:57.949788  127760 kubeadm.go:322] 
	I1212 23:22:57.949888  127760 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:22:57.949913  127760 kubeadm.go:322] 
	I1212 23:22:57.950013  127760 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:22:57.950036  127760 kubeadm.go:322] 
	I1212 23:22:57.950079  127760 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:22:57.950155  127760 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:22:57.950228  127760 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:22:57.950240  127760 kubeadm.go:322] 
	I1212 23:22:57.950301  127760 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:22:57.950311  127760 kubeadm.go:322] 
	I1212 23:22:57.950375  127760 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:22:57.950385  127760 kubeadm.go:322] 
	I1212 23:22:57.950468  127760 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:22:57.950578  127760 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:22:57.950678  127760 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:22:57.950702  127760 kubeadm.go:322] 
	I1212 23:22:57.950818  127760 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:22:57.950916  127760 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:22:57.950926  127760 kubeadm.go:322] 
	I1212 23:22:57.951054  127760 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951199  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:22:57.951231  127760 kubeadm.go:322] 	--control-plane 
	I1212 23:22:57.951266  127760 kubeadm.go:322] 
	I1212 23:22:57.951386  127760 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:22:57.951396  127760 kubeadm.go:322] 
	I1212 23:22:57.951494  127760 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ichlu8.wzw1wbhrbc06xbtw \
	I1212 23:22:57.951619  127760 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:22:57.952303  127760 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:22:57.952326  127760 cni.go:84] Creating CNI manager for ""
	I1212 23:22:57.952337  127760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:22:57.954692  127760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:22:54.628965  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.127922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:55.627980  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.128047  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:56.628471  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.128456  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.628284  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.128528  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.628480  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.128296  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:57.955898  127760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:22:57.975567  127760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:22:58.044612  127760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:22:58.044741  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.044746  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=embed-certs-809120 minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.158788  127760 ops.go:34] apiserver oom_adj: -16
	I1212 23:22:58.375305  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:58.487117  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.075465  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.575132  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.075781  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.575754  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.075376  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.575524  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.075163  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.574821  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:22:59.628475  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.128509  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:00.628837  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.128959  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:01.627976  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.128077  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:02.628493  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.128203  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.628549  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.127987  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.627922  128156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.756882  128156 kubeadm.go:1088] duration metric: took 13.272316322s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:04.756928  128156 kubeadm.go:406] StartCluster complete in 5m42.440524658s
	I1212 23:23:04.756955  128156 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.757069  128156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:04.759734  128156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:04.760081  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:04.760220  128156 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:04.760311  128156 addons.go:69] Setting storage-provisioner=true in profile "no-preload-115023"
	I1212 23:23:04.760325  128156 addons.go:69] Setting default-storageclass=true in profile "no-preload-115023"
	I1212 23:23:04.760358  128156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-115023"
	I1212 23:23:04.760385  128156 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:23:04.760332  128156 addons.go:231] Setting addon storage-provisioner=true in "no-preload-115023"
	W1212 23:23:04.760426  128156 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:04.760497  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760337  128156 addons.go:69] Setting metrics-server=true in profile "no-preload-115023"
	I1212 23:23:04.760525  128156 addons.go:231] Setting addon metrics-server=true in "no-preload-115023"
	W1212 23:23:04.760538  128156 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:04.760577  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.760759  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760787  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.760953  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760986  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.760995  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.761010  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.777848  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1212 23:23:04.778063  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I1212 23:23:04.778315  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778479  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.778613  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I1212 23:23:04.778931  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778945  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.778952  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.778957  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779020  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.779302  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.779561  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.779726  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.779749  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.779929  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.779961  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.780516  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.781173  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.781207  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.783399  128156 addons.go:231] Setting addon default-storageclass=true in "no-preload-115023"
	W1212 23:23:04.783422  128156 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:04.783452  128156 host.go:66] Checking if "no-preload-115023" exists ...
	I1212 23:23:04.783871  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.783906  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.797493  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:23:04.797741  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I1212 23:23:04.798102  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798132  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.798613  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798630  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.798956  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.798985  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.799262  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799375  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.799438  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.799639  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.801934  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.802007  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.803861  128156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:04.802341  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I1212 23:23:04.806911  128156 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:04.805759  128156 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:04.806058  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.808825  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:04.808833  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:04.808848  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:04.808856  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.808863  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.809266  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.809281  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.809624  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.810352  128156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:04.810381  128156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:04.813139  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813629  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.813654  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813828  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.813882  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814303  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.814333  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.814148  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814542  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.814625  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.814797  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.814855  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.814954  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.815127  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.823127  128156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-115023" context rescaled to 1 replicas
	I1212 23:23:04.823174  128156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.32 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:04.824991  128156 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:04.826596  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:04.827821  128156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I1212 23:23:04.828256  128156 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:04.828820  128156 main.go:141] libmachine: Using API Version  1
	I1212 23:23:04.828845  128156 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:04.829390  128156 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:04.829741  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetState
	I1212 23:23:04.834167  128156 main.go:141] libmachine: (no-preload-115023) Calling .DriverName
	I1212 23:23:04.834521  128156 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:04.834539  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:04.834563  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHHostname
	I1212 23:23:04.838055  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838555  128156 main.go:141] libmachine: (no-preload-115023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:84:7a", ip: ""} in network mk-no-preload-115023: {Iface:virbr4 ExpiryTime:2023-12-13 00:16:54 +0000 UTC Type:0 Mac:52:54:00:5e:84:7a Iaid: IPaddr:192.168.72.32 Prefix:24 Hostname:no-preload-115023 Clientid:01:52:54:00:5e:84:7a}
	I1212 23:23:04.838587  128156 main.go:141] libmachine: (no-preload-115023) DBG | domain no-preload-115023 has defined IP address 192.168.72.32 and MAC address 52:54:00:5e:84:7a in network mk-no-preload-115023
	I1212 23:23:04.838772  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHPort
	I1212 23:23:04.838964  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHKeyPath
	I1212 23:23:04.839119  128156 main.go:141] libmachine: (no-preload-115023) Calling .GetSSHUsername
	I1212 23:23:04.839284  128156 sshutil.go:53] new ssh client: &{IP:192.168.72.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/no-preload-115023/id_rsa Username:docker}
	I1212 23:23:04.972964  128156 node_ready.go:35] waiting up to 6m0s for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.973014  128156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:04.998182  128156 node_ready.go:49] node "no-preload-115023" has status "Ready":"True"
	I1212 23:23:04.998214  128156 node_ready.go:38] duration metric: took 25.214785ms waiting for node "no-preload-115023" to be "Ready" ...
	I1212 23:23:04.998226  128156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:05.012036  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:05.027954  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:05.027977  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:05.063451  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:05.076403  128156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:05.119924  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:05.119957  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:05.216413  128156 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.216443  128156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:05.285434  128156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:05.817542  128156 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:06.316381  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.252894593s)
	I1212 23:23:06.316378  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304291472s)
	I1212 23:23:06.316446  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316460  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316491  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316509  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.316903  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.316959  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.316966  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.316986  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316916  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317010  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.317022  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.316995  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317032  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.317327  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.317387  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.317408  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.318858  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.318881  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.366104  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.366135  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.366427  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.366481  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.366492  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618093  128156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.332604197s)
	I1212 23:23:06.618161  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618183  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618643  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.618665  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.618676  128156 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:06.618684  128156 main.go:141] libmachine: (no-preload-115023) Calling .Close
	I1212 23:23:06.618845  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620326  128156 main.go:141] libmachine: (no-preload-115023) DBG | Closing plugin on server side
	I1212 23:23:06.620340  128156 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:06.620363  128156 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:06.620384  128156 addons.go:467] Verifying addon metrics-server=true in "no-preload-115023"
	I1212 23:23:06.622226  128156 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:03.075069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:03.575772  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.074921  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:04.575481  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.075785  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:05.575855  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.075276  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.575017  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.075100  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:07.575342  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:06.623716  128156 addons.go:502] enable addons completed in 1.863496659s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:07.165490  128156 pod_ready.go:102] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:08.161341  128156 pod_ready.go:92] pod "coredns-76f75df574-9wxzk" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.161380  128156 pod_ready.go:81] duration metric: took 3.084948492s waiting for pod "coredns-76f75df574-9wxzk" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.161395  128156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169259  128156 pod_ready.go:92] pod "etcd-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.169294  128156 pod_ready.go:81] duration metric: took 7.890109ms waiting for pod "etcd-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.169309  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176068  128156 pod_ready.go:92] pod "kube-apiserver-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.176097  128156 pod_ready.go:81] duration metric: took 6.779109ms waiting for pod "kube-apiserver-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.176111  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183056  128156 pod_ready.go:92] pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:08.183085  128156 pod_ready.go:81] duration metric: took 6.964809ms waiting for pod "kube-controller-manager-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:08.183099  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066100  128156 pod_ready.go:92] pod "kube-proxy-qs95k" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.066123  128156 pod_ready.go:81] duration metric: took 883.017234ms waiting for pod "kube-proxy-qs95k" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.066132  128156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357841  128156 pod_ready.go:92] pod "kube-scheduler-no-preload-115023" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:09.357874  128156 pod_ready.go:81] duration metric: took 291.734639ms waiting for pod "kube-scheduler-no-preload-115023" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:09.357884  128156 pod_ready.go:38] duration metric: took 4.359648281s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:09.357904  128156 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:09.357970  128156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:09.372791  128156 api_server.go:72] duration metric: took 4.549577037s to wait for apiserver process to appear ...
	I1212 23:23:09.372820  128156 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:09.372841  128156 api_server.go:253] Checking apiserver healthz at https://192.168.72.32:8443/healthz ...
	I1212 23:23:09.378375  128156 api_server.go:279] https://192.168.72.32:8443/healthz returned 200:
	ok
	I1212 23:23:09.379855  128156 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:23:09.379882  128156 api_server.go:131] duration metric: took 7.054126ms to wait for apiserver health ...
	I1212 23:23:09.379893  128156 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:09.561188  128156 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:09.561216  128156 system_pods.go:61] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.561221  128156 system_pods.go:61] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.561225  128156 system_pods.go:61] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.561229  128156 system_pods.go:61] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.561235  128156 system_pods.go:61] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.561239  128156 system_pods.go:61] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.561245  128156 system_pods.go:61] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.561249  128156 system_pods.go:61] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.561257  128156 system_pods.go:74] duration metric: took 181.358443ms to wait for pod list to return data ...
	I1212 23:23:09.561265  128156 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:09.756864  128156 default_sa.go:45] found service account: "default"
	I1212 23:23:09.756894  128156 default_sa.go:55] duration metric: took 195.622122ms for default service account to be created ...
	I1212 23:23:09.756905  128156 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:09.960670  128156 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:09.960700  128156 system_pods.go:89] "coredns-76f75df574-9wxzk" [6c1b5bb4-619d-48a2-9c81-060018616240] Running
	I1212 23:23:09.960705  128156 system_pods.go:89] "etcd-no-preload-115023" [3d51f898-1a22-4a89-9882-c9e5b177b48b] Running
	I1212 23:23:09.960710  128156 system_pods.go:89] "kube-apiserver-no-preload-115023" [5c939fc1-065c-4d76-a103-fc00df53e2ca] Running
	I1212 23:23:09.960715  128156 system_pods.go:89] "kube-controller-manager-no-preload-115023" [d268b7e4-88d2-4539-af42-365dd1056e38] Running
	I1212 23:23:09.960719  128156 system_pods.go:89] "kube-proxy-qs95k" [5d936172-0411-4163-a62a-25a11d4ac2f4] Running
	I1212 23:23:09.960723  128156 system_pods.go:89] "kube-scheduler-no-preload-115023" [19824039-9498-4722-92bd-9b052641e96a] Running
	I1212 23:23:09.960729  128156 system_pods.go:89] "metrics-server-57f55c9bc5-wlql5" [d9786845-dc0b-4120-be39-2ddde167b817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:09.960735  128156 system_pods.go:89] "storage-provisioner" [5e1865df-d2a5-4ebe-be00-20aa7a752e65] Running
	I1212 23:23:09.960744  128156 system_pods.go:126] duration metric: took 203.831934ms to wait for k8s-apps to be running ...
	I1212 23:23:09.960754  128156 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:09.960805  128156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:09.974511  128156 system_svc.go:56] duration metric: took 13.742619ms WaitForService to wait for kubelet.
	I1212 23:23:09.974543  128156 kubeadm.go:581] duration metric: took 5.15133848s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:09.974571  128156 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:10.158679  128156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:10.158708  128156 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:10.158717  128156 node_conditions.go:105] duration metric: took 184.140544ms to run NodePressure ...
	I1212 23:23:10.158730  128156 start.go:228] waiting for startup goroutines ...
	I1212 23:23:10.158736  128156 start.go:233] waiting for cluster config update ...
	I1212 23:23:10.158746  128156 start.go:242] writing updated cluster config ...
	I1212 23:23:10.158996  128156 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:10.222646  128156 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:23:10.224867  128156 out.go:177] * Done! kubectl is now configured to use "no-preload-115023" cluster and "default" namespace by default
	I1212 23:23:08.075026  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:08.574992  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.075693  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:09.575069  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.075713  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:10.575464  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.075090  127760 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:23:11.250257  127760 kubeadm.go:1088] duration metric: took 13.205579442s to wait for elevateKubeSystemPrivileges.
	I1212 23:23:11.250290  127760 kubeadm.go:406] StartCluster complete in 5m12.212668558s
	I1212 23:23:11.250312  127760 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.250409  127760 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:23:11.253977  127760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:23:11.254241  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:23:11.254250  127760 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:23:11.254337  127760 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-809120"
	I1212 23:23:11.254351  127760 addons.go:69] Setting default-storageclass=true in profile "embed-certs-809120"
	I1212 23:23:11.254358  127760 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-809120"
	W1212 23:23:11.254366  127760 addons.go:240] addon storage-provisioner should already be in state true
	I1212 23:23:11.254369  127760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-809120"
	I1212 23:23:11.254422  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254431  127760 addons.go:69] Setting metrics-server=true in profile "embed-certs-809120"
	I1212 23:23:11.254457  127760 addons.go:231] Setting addon metrics-server=true in "embed-certs-809120"
	W1212 23:23:11.254466  127760 addons.go:240] addon metrics-server should already be in state true
	I1212 23:23:11.254466  127760 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:23:11.254510  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.254798  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254802  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254845  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.254902  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.254933  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.255058  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.272689  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1212 23:23:11.272926  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1212 23:23:11.273095  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273297  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273444  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1212 23:23:11.273710  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273722  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.273784  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.273935  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.273947  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274773  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.274917  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.274942  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.275403  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.275452  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.275615  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.275776  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.276164  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.276199  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.279953  127760 addons.go:231] Setting addon default-storageclass=true in "embed-certs-809120"
	W1212 23:23:11.279984  127760 addons.go:240] addon default-storageclass should already be in state true
	I1212 23:23:11.280016  127760 host.go:66] Checking if "embed-certs-809120" exists ...
	I1212 23:23:11.280439  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.280488  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.296262  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I1212 23:23:11.296273  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1212 23:23:11.296731  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.296839  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.297284  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297296  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.297304  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297315  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.297662  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297722  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.297820  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.297867  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I1212 23:23:11.297876  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.298202  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.298805  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.298823  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.299106  127760 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-809120" context rescaled to 1 replicas
	I1212 23:23:11.299151  127760 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:23:11.300876  127760 out.go:177] * Verifying Kubernetes components...
	I1212 23:23:11.299808  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.299838  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.299990  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.302374  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:11.303907  127760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:23:11.305369  127760 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 23:23:11.302872  127760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:23:11.307972  127760 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.307992  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:23:11.308012  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306693  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 23:23:11.308064  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 23:23:11.308088  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.306729  127760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:23:11.312550  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312826  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.312853  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.312892  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313337  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.313477  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.313493  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313524  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.313558  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.313610  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.313772  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.313988  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.314165  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.314287  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.334457  127760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1212 23:23:11.335025  127760 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:23:11.335687  127760 main.go:141] libmachine: Using API Version  1
	I1212 23:23:11.335719  127760 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:23:11.336130  127760 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:23:11.336356  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetState
	I1212 23:23:11.338062  127760 main.go:141] libmachine: (embed-certs-809120) Calling .DriverName
	I1212 23:23:11.338356  127760 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.338380  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:23:11.338407  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHHostname
	I1212 23:23:11.341489  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342079  127760 main.go:141] libmachine: (embed-certs-809120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:a9:e8", ip: ""} in network mk-embed-certs-809120: {Iface:virbr2 ExpiryTime:2023-12-13 00:08:09 +0000 UTC Type:0 Mac:52:54:00:1c:a9:e8 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:embed-certs-809120 Clientid:01:52:54:00:1c:a9:e8}
	I1212 23:23:11.342119  127760 main.go:141] libmachine: (embed-certs-809120) DBG | domain embed-certs-809120 has defined IP address 192.168.50.221 and MAC address 52:54:00:1c:a9:e8 in network mk-embed-certs-809120
	I1212 23:23:11.342283  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHPort
	I1212 23:23:11.342499  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHKeyPath
	I1212 23:23:11.342642  127760 main.go:141] libmachine: (embed-certs-809120) Calling .GetSSHUsername
	I1212 23:23:11.342823  127760 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/embed-certs-809120/id_rsa Username:docker}
	I1212 23:23:11.562179  127760 node_ready.go:35] waiting up to 6m0s for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.562383  127760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:23:11.573888  127760 node_ready.go:49] node "embed-certs-809120" has status "Ready":"True"
	I1212 23:23:11.573909  127760 node_ready.go:38] duration metric: took 11.694074ms waiting for node "embed-certs-809120" to be "Ready" ...
	I1212 23:23:11.573919  127760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:11.591310  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:11.634553  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:23:11.672164  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:23:11.681199  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 23:23:11.681232  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 23:23:11.910291  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 23:23:11.910325  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 23:23:11.993110  127760 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:11.993135  127760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 23:23:12.043047  127760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 23:23:13.550517  127760 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.988091372s)
	I1212 23:23:13.550558  127760 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 23:23:13.642966  127760 pod_ready.go:102] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"False"
	I1212 23:23:14.387226  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752630931s)
	I1212 23:23:14.387298  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387315  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387321  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.715126034s)
	I1212 23:23:14.387345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387359  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387641  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387663  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387675  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387690  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.387776  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.387801  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.387811  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.387819  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.388233  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388247  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388248  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.388285  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.388291  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.388345  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.426683  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.426713  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.427017  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.427030  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.427038  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.477873  127760 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.434777303s)
	I1212 23:23:14.477930  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.477944  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478303  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478321  127760 main.go:141] libmachine: (embed-certs-809120) DBG | Closing plugin on server side
	I1212 23:23:14.478333  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478345  127760 main.go:141] libmachine: Making call to close driver server
	I1212 23:23:14.478357  127760 main.go:141] libmachine: (embed-certs-809120) Calling .Close
	I1212 23:23:14.478607  127760 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:23:14.478622  127760 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:23:14.478632  127760 addons.go:467] Verifying addon metrics-server=true in "embed-certs-809120"
	I1212 23:23:14.480500  127760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1212 23:23:14.481900  127760 addons.go:502] enable addons completed in 3.227656537s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1212 23:23:15.629572  127760 pod_ready.go:92] pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.629599  127760 pod_ready.go:81] duration metric: took 4.038262674s waiting for pod "coredns-5dd5756b68-qz4fn" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.629608  127760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.638502  127760 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638532  127760 pod_ready.go:81] duration metric: took 8.918039ms waiting for pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace to be "Ready" ...
	E1212 23:23:15.638547  127760 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vc5hr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vc5hr" not found
	I1212 23:23:15.638556  127760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647047  127760 pod_ready.go:92] pod "etcd-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.647075  127760 pod_ready.go:81] duration metric: took 8.510672ms waiting for pod "etcd-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.647089  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655068  127760 pod_ready.go:92] pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.655091  127760 pod_ready.go:81] duration metric: took 7.994932ms waiting for pod "kube-apiserver-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.655100  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664338  127760 pod_ready.go:92] pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:15.664386  127760 pod_ready.go:81] duration metric: took 9.26869ms waiting for pod "kube-controller-manager-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:15.664401  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732454  127760 pod_ready.go:92] pod "kube-proxy-4nb6w" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:16.732480  127760 pod_ready.go:81] duration metric: took 1.068071012s waiting for pod "kube-proxy-4nb6w" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:16.732489  127760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022376  127760 pod_ready.go:92] pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace has status "Ready":"True"
	I1212 23:23:17.022402  127760 pod_ready.go:81] duration metric: took 289.906446ms waiting for pod "kube-scheduler-embed-certs-809120" in "kube-system" namespace to be "Ready" ...
	I1212 23:23:17.022423  127760 pod_ready.go:38] duration metric: took 5.448491831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 23:23:17.022445  127760 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:23:17.022494  127760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:23:17.039594  127760 api_server.go:72] duration metric: took 5.740406855s to wait for apiserver process to appear ...
	I1212 23:23:17.039620  127760 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:23:17.039637  127760 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8443/healthz ...
	I1212 23:23:17.044745  127760 api_server.go:279] https://192.168.50.221:8443/healthz returned 200:
	ok
	I1212 23:23:17.046494  127760 api_server.go:141] control plane version: v1.28.4
	I1212 23:23:17.046521  127760 api_server.go:131] duration metric: took 6.894306ms to wait for apiserver health ...
	I1212 23:23:17.046531  127760 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:23:17.227869  127760 system_pods.go:59] 8 kube-system pods found
	I1212 23:23:17.227899  127760 system_pods.go:61] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.227904  127760 system_pods.go:61] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.227909  127760 system_pods.go:61] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.227913  127760 system_pods.go:61] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.227916  127760 system_pods.go:61] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.227920  127760 system_pods.go:61] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.227927  127760 system_pods.go:61] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.227933  127760 system_pods.go:61] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.227944  127760 system_pods.go:74] duration metric: took 181.405975ms to wait for pod list to return data ...
	I1212 23:23:17.227962  127760 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:23:17.423151  127760 default_sa.go:45] found service account: "default"
	I1212 23:23:17.423181  127760 default_sa.go:55] duration metric: took 195.20215ms for default service account to be created ...
	I1212 23:23:17.423190  127760 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 23:23:17.627077  127760 system_pods.go:86] 8 kube-system pods found
	I1212 23:23:17.627104  127760 system_pods.go:89] "coredns-5dd5756b68-qz4fn" [54a2e604-2026-486a-befa-f5a310cb017e] Running
	I1212 23:23:17.627109  127760 system_pods.go:89] "etcd-embed-certs-809120" [c385f00e-c988-486d-96d5-ae7b71e10f82] Running
	I1212 23:23:17.627114  127760 system_pods.go:89] "kube-apiserver-embed-certs-809120" [d5a4db23-8738-4cbc-8b25-86e61d82d009] Running
	I1212 23:23:17.627118  127760 system_pods.go:89] "kube-controller-manager-embed-certs-809120" [dc24baca-6be4-4b68-b2d2-77b83180e49d] Running
	I1212 23:23:17.627124  127760 system_pods.go:89] "kube-proxy-4nb6w" [a79e36cc-eaa9-45da-8a3e-414424129991] Running
	I1212 23:23:17.627128  127760 system_pods.go:89] "kube-scheduler-embed-certs-809120" [3d8e560f-f28b-418c-9a99-b98f8104be50] Running
	I1212 23:23:17.627135  127760 system_pods.go:89] "metrics-server-57f55c9bc5-m6nc6" [e12a702a-24d8-4b08-9ca3-a1b79f5df5e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 23:23:17.627139  127760 system_pods.go:89] "storage-provisioner" [4a660d9e-2a10-49de-bb1d-fd237aa3345e] Running
	I1212 23:23:17.627147  127760 system_pods.go:126] duration metric: took 203.952951ms to wait for k8s-apps to be running ...
	I1212 23:23:17.627155  127760 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 23:23:17.627197  127760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:23:17.641949  127760 system_svc.go:56] duration metric: took 14.784378ms WaitForService to wait for kubelet.
	I1212 23:23:17.641979  127760 kubeadm.go:581] duration metric: took 6.342797652s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 23:23:17.642005  127760 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:23:17.823169  127760 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:23:17.823201  127760 node_conditions.go:123] node cpu capacity is 2
	I1212 23:23:17.823214  127760 node_conditions.go:105] duration metric: took 181.202017ms to run NodePressure ...
	I1212 23:23:17.823230  127760 start.go:228] waiting for startup goroutines ...
	I1212 23:23:17.823258  127760 start.go:233] waiting for cluster config update ...
	I1212 23:23:17.823276  127760 start.go:242] writing updated cluster config ...
	I1212 23:23:17.823609  127760 ssh_runner.go:195] Run: rm -f paused
	I1212 23:23:17.879192  127760 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 23:23:17.880946  127760 out.go:177] * Done! kubectl is now configured to use "embed-certs-809120" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:16:32 UTC, ends at Tue 2023-12-12 23:36:02 UTC. --
	Dec 12 23:36:01 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:01.964196239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424161964181694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=37145ae9-2b25-43ad-9113-93fd84351928 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:01 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:01.964985220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=beeea765-70a1-4e9f-a825-922c14843200 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:01 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:01.965180106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=beeea765-70a1-4e9f-a825-922c14843200 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:01 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:01.965655578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=beeea765-70a1-4e9f-a825-922c14843200 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.006314939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a0218423-ebf8-4d5e-9572-9264913d7baf name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.006372669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a0218423-ebf8-4d5e-9572-9264913d7baf name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.007664660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=70573857-5795-4b34-94c0-79fe0c70cb12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.008175327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424162008161020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=70573857-5795-4b34-94c0-79fe0c70cb12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.009261027Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d9d8e5f6-9c53-4b58-9c31-951508e0a1be name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.009475013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d9d8e5f6-9c53-4b58-9c31-951508e0a1be name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.009661482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d9d8e5f6-9c53-4b58-9c31-951508e0a1be name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.052395594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=926d2339-e0bd-4289-8413-643588387047 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.052457139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=926d2339-e0bd-4289-8413-643588387047 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.053764955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aff11472-f7a6-4a84-812c-37341425a9c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.054230050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424162054214766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=aff11472-f7a6-4a84-812c-37341425a9c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.054946607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dbe7059b-be60-4776-96a7-a31628237a65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.055019745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dbe7059b-be60-4776-96a7-a31628237a65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.055409402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbe7059b-be60-4776-96a7-a31628237a65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.093059592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6362f3db-4895-4ba9-b39c-f30a4c4e0084 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.093123319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6362f3db-4895-4ba9-b39c-f30a4c4e0084 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.094168809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f6c003ae-7f18-42aa-8302-7e178fe46077 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.094565312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424162094549131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=f6c003ae-7f18-42aa-8302-7e178fe46077 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.095080785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86837a65-6f82-4cd5-b69f-da508aa29c29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.095152785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86837a65-6f82-4cd5-b69f-da508aa29c29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:36:02 old-k8s-version-549640 crio[707]: time="2023-12-12 23:36:02.095360065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423064416991608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:399cc9a4dae644dadba2b8d00cd6e673a4e57612d395b8119f62c9449f511811,PodSandboxId:f30e5ab7b55b51c48ab261b2eaf01f1f7191a2419449e972757d28bb41095304,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423036491351914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b53af585-9754-4561-8e28-c04e2d0d07d1,},Annotations:map[string]string{io.kubernetes.container.hash: 50ab1e41,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357,PodSandboxId:e886358cd2af18337be0a12e6ac86dad823388bf5687599b1f8cc1f52f531dd0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1702423034067209002,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b6lz6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec8ee19-e734-4792-82be-3765afc63a12,},Annotations:map[string]string{io.kubernetes.container.hash: a3c96f2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6,PodSandboxId:bbbff139fe3ae5940e55aced698f384255a83849c7cbbdd37301678312f5eeba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1702423034968934884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4698s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf3181b9-bbf8-431d-9b2f-45daee2289f1,},Annotations:map[string]string{io.kubernetes.container.hash: 69d8bc5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pro
tocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e,PodSandboxId:a82e04d6d739035315239120225183106ad49d9470ecad4df721d2a21524e896,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423033513317283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a525a632-2304-4070-83a1-0
d4a0a995d2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fa9e7fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee,PodSandboxId:332a20be4b58e5586eec518f9dd23bbec301122f09dbb6e53e0d423bacb11e56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1702423025966005276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7542d475f69aeb3071385839efe3697,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2b4a6c8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a,PodSandboxId:1dc475944d91d0f60cde385b4a25cdb9bdf30a68c9ccce2b94e3842da7e212c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1702423024522391922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71,PodSandboxId:6ae6ede330e1ac4ed4f8954d6c6d5843d7d36d3bbe23e06bd35f81f813de989d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1702423024202694505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1,PodSandboxId:b3165eccb7464295b1609b4032e9b53059fad6262ded0bac7ca357fa948faded,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1702423024075772324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-549640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058c72f1b1a0f7dc54bd481a33984172,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3c07a316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86837a65-6f82-4cd5-b69f-da508aa29c29 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0372df844f76c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       1                   a82e04d6d7390       storage-provisioner
	399cc9a4dae64       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   f30e5ab7b55b5       busybox
	9bfcd578e7bf5       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   bbbff139fe3ae       coredns-5644d7b6d9-4698s
	724f33e972a14       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   e886358cd2af1       kube-proxy-b6lz6
	f0f94bb587a89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       0                   a82e04d6d7390       storage-provisioner
	89884a774b5b4       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   332a20be4b58e       etcd-old-k8s-version-549640
	8800e89e7fd31       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   1dc475944d91d       kube-scheduler-old-k8s-version-549640
	606ff8dc40025       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   6ae6ede330e1a       kube-controller-manager-old-k8s-version-549640
	98d3d9c460328       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   b3165eccb7464       kube-apiserver-old-k8s-version-549640
	
	* 
	* ==> coredns [9bfcd578e7bf5d5a1d20f70efb91daf02b506b2d0fe82414d7d15602ab0a00b6] <==
	* 2023-12-12T23:17:20.481Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-12-12T23:17:20.494Z [INFO] 127.0.0.1:43035 - 7884 "HINFO IN 7347994220414496808.5971008044067915772. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013683582s
	2023-12-12T23:17:25.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-12-12T23:17:35.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I1212 23:17:45.483711       1 trace.go:82] Trace[573070858]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.479181525 +0000 UTC m=+0.187896749) (total time: 30.00442104s):
	Trace[573070858]: [30.00442104s] [30.00442104s] END
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:17:45.483999       1 trace.go:82] Trace[1175423538]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.478986785 +0000 UTC m=+0.187701963) (total time: 30.00498428s):
	Trace[1175423538]: [30.00498428s] [30.00498428s] END
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I1212 23:17:45.486408       1 trace.go:82] Trace[1090959793]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-12-12 23:17:15.485822041 +0000 UTC m=+0.194537223) (total time: 30.000565845s):
	Trace[1090959793]: [30.000565845s] [30.000565845s] END
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2023-12-12T23:17:45.564Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	E1212 23:17:45.483814       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.484051       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E1212 23:17:45.486459       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-549640
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-549640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=old-k8s-version-549640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_07_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:07:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:35:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:35:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:35:41 +0000   Tue, 12 Dec 2023 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:35:41 +0000   Tue, 12 Dec 2023 23:17:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.146
	  Hostname:    old-k8s-version-549640
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 023b94e4f5064d3b92a8dfc25385dd75
	 System UUID:                023b94e4-f506-4d3b-92a8-dfc25385dd75
	 Boot ID:                    52e5aea4-3448-460f-97f6-e727db27da5a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                coredns-5644d7b6d9-4698s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                etcd-old-k8s-version-549640                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-549640             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-549640    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-b6lz6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-scheduler-old-k8s-version-549640             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                metrics-server-74d5856cc6-hsjtz                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                kube-proxy, old-k8s-version-549640  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-549640     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet, old-k8s-version-549640     Node old-k8s-version-549640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-549640     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-549640  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec12 23:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354716] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.550543] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148326] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.463375] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.225715] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.095794] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.137505] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.116913] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.240463] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[Dec12 23:17] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +0.488824] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.918632] kauditd_printk_skb: 13 callbacks suppressed
	[ +24.902231] hrtimer: interrupt took 8290587 ns
	
	* 
	* ==> etcd [89884a774b5b4c5db8aeae60740732dcba4a652a94b845ac87006421e4bf4dee] <==
	* 2023-12-12 23:17:06.417398 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-12 23:17:06.417979 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 23:17:06.418157 I | embed: listening for metrics on http://192.168.61.146:2381
	2023-12-12 23:17:06.418521 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-12 23:17:07.409700 I | raft: 52a637c8f882c7df is starting a new election at term 2
	2023-12-12 23:17:07.409976 I | raft: 52a637c8f882c7df became candidate at term 3
	2023-12-12 23:17:07.410025 I | raft: 52a637c8f882c7df received MsgVoteResp from 52a637c8f882c7df at term 3
	2023-12-12 23:17:07.410055 I | raft: 52a637c8f882c7df became leader at term 3
	2023-12-12 23:17:07.410078 I | raft: raft.node: 52a637c8f882c7df elected leader 52a637c8f882c7df at term 3
	2023-12-12 23:17:07.410408 I | etcdserver: published {Name:old-k8s-version-549640 ClientURLs:[https://192.168.61.146:2379]} to cluster a63b81a8045c22a0
	2023-12-12 23:17:07.411048 I | embed: ready to serve client requests
	2023-12-12 23:17:07.411483 I | embed: ready to serve client requests
	2023-12-12 23:17:07.412443 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 23:17:07.416588 I | embed: serving client requests on 192.168.61.146:2379
	2023-12-12 23:17:12.117229 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (118.026013ms) to execute
	2023-12-12 23:17:12.117556 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:deployment-controller\" " with result "range_response_count:1 size:495" took too long (178.987791ms) to execute
	2023-12-12 23:17:14.979219 W | etcdserver: request "header:<ID:14402384478672134807 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-549640.17a038b880c7812c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-549640.17a038b880c7812c\" value_size:402 lease:5179012441817358724 >> failure:<>>" with result "size:16" took too long (132.019764ms) to execute
	2023-12-12 23:17:15.426388 W | etcdserver: request "header:<ID:14402384478672134823 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-549640.17a038b88c60dc49\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-549640.17a038b88c60dc49\" value_size:439 lease:5179012441817358724 >> failure:<>>" with result "size:16" took too long (149.567871ms) to execute
	2023-12-12 23:17:15.437610 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:263" took too long (165.136508ms) to execute
	2023-12-12 23:17:15.446130 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:3065" took too long (173.422947ms) to execute
	2023-12-12 23:17:15.447257 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" " with result "range_response_count:1 size:928" took too long (170.171448ms) to execute
	2023-12-12 23:27:07.437350 I | mvcc: store.index: compact 822
	2023-12-12 23:27:07.439358 I | mvcc: finished scheduled compaction at 822 (took 1.31175ms)
	2023-12-12 23:32:07.457763 I | mvcc: store.index: compact 1040
	2023-12-12 23:32:07.459651 I | mvcc: finished scheduled compaction at 1040 (took 1.208898ms)
	
	* 
	* ==> kernel <==
	*  23:36:02 up 19 min,  0 users,  load average: 0.32, 0.32, 0.21
	Linux old-k8s-version-549640 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [98d3d9c460328f30542031301489bd5ce950646093abe7a97f8660310c2e2fd1] <==
	* I1212 23:28:11.901732       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:28:11.901814       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:28:11.901908       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:28:11.901919       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:30:11.902438       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:30:11.902813       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:30:11.903000       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:30:11.903033       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:32:11.904405       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:32:11.904511       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:32:11.904568       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:32:11.904575       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:33:11.904843       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:33:11.904999       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:33:11.905049       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:33:11.905057       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:35:11.905458       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1212 23:35:11.905565       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 23:35:11.905620       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:35:11.905627       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [606ff8dc4002550f756dd92f8c7da53ad9b01e468860ba301bf4ecb41de2ba71] <==
	* E1212 23:29:35.895641       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:29:44.678470       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:30:06.148159       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:30:16.680819       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:30:36.400679       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:30:48.682662       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:31:06.652741       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:31:20.685961       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:31:36.904833       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:31:52.688567       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:32:07.162331       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:32:24.690812       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:32:37.414587       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:32:56.693289       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:33:07.666491       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:33:28.695351       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:33:37.918615       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:34:00.697396       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:34:08.171441       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:34:32.699643       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:34:38.423608       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:35:04.702075       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:35:08.675653       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1212 23:35:36.704249       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1212 23:35:38.927600       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [724f33e972a1406160ad0044ce22dea8779c69d97adb4b905d165e34b5219357] <==
	* W1212 23:08:10.651579       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 23:08:10.659959       1 node.go:135] Successfully retrieved node IP: 192.168.61.146
	I1212 23:08:10.660042       1 server_others.go:149] Using iptables Proxier.
	I1212 23:08:10.660735       1 server.go:529] Version: v1.16.0
	I1212 23:08:10.666989       1 config.go:313] Starting service config controller
	I1212 23:08:10.667083       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 23:08:10.669589       1 config.go:131] Starting endpoints config controller
	I1212 23:08:10.672166       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 23:08:10.767692       1 shared_informer.go:204] Caches are synced for service config 
	I1212 23:08:10.772828       1 shared_informer.go:204] Caches are synced for endpoints config 
	E1212 23:09:20.956706       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=485&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 192.168.61.146:8443: connect: connection refused
	E1212 23:09:20.957215       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=487&timeout=8m45s&timeoutSeconds=525&watch=true: dial tcp 192.168.61.146:8443: connect: connection refused
	W1212 23:17:15.701239       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1212 23:17:15.715431       1 node.go:135] Successfully retrieved node IP: 192.168.61.146
	I1212 23:17:15.715513       1 server_others.go:149] Using iptables Proxier.
	I1212 23:17:15.716205       1 server.go:529] Version: v1.16.0
	I1212 23:17:15.719063       1 config.go:313] Starting service config controller
	I1212 23:17:15.724997       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1212 23:17:15.719451       1 config.go:131] Starting endpoints config controller
	I1212 23:17:15.725237       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1212 23:17:15.825497       1 shared_informer.go:204] Caches are synced for service config 
	I1212 23:17:15.825952       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8800e89e7fd31593c9760d94805e6961473ddd9dd3133df78eb0f0811b6ddb3a] <==
	* E1212 23:07:46.838814       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:46.840721       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:46.840844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:07:47.834731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:07:47.835452       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:07:47.842398       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:07:47.845176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:07:47.846311       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 23:07:47.847011       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:07:47.848112       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:07:47.851500       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:07:47.851725       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:47.851995       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:07:47.853132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1212 23:17:05.834940       1 serving.go:319] Generated self-signed cert in-memory
	W1212 23:17:10.907694       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:17:10.910369       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:17:10.910752       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:17:10.911508       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:17:10.961926       1 server.go:143] Version: v1.16.0
	I1212 23:17:10.962073       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W1212 23:17:10.972790       1 authorization.go:47] Authorization is disabled
	W1212 23:17:10.972975       1 authentication.go:79] Authentication is disabled
	I1212 23:17:10.973020       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1212 23:17:10.973683       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:16:32 UTC, ends at Tue 2023-12-12 23:36:02 UTC. --
	Dec 12 23:31:25 old-k8s-version-549640 kubelet[1024]: E1212 23:31:25.151467    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:31:40 old-k8s-version-549640 kubelet[1024]: E1212 23:31:40.146779    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:31:54 old-k8s-version-549640 kubelet[1024]: E1212 23:31:54.153228    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:32:03 old-k8s-version-549640 kubelet[1024]: E1212 23:32:03.253844    1024 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 12 23:32:09 old-k8s-version-549640 kubelet[1024]: E1212 23:32:09.146930    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:32:23 old-k8s-version-549640 kubelet[1024]: E1212 23:32:23.147486    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:32:38 old-k8s-version-549640 kubelet[1024]: E1212 23:32:38.147630    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:32:49 old-k8s-version-549640 kubelet[1024]: E1212 23:32:49.147179    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:33:01 old-k8s-version-549640 kubelet[1024]: E1212 23:33:01.147597    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:33:15 old-k8s-version-549640 kubelet[1024]: E1212 23:33:15.161836    1024 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:33:15 old-k8s-version-549640 kubelet[1024]: E1212 23:33:15.161991    1024 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:33:15 old-k8s-version-549640 kubelet[1024]: E1212 23:33:15.162095    1024 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 12 23:33:15 old-k8s-version-549640 kubelet[1024]: E1212 23:33:15.162141    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 12 23:33:28 old-k8s-version-549640 kubelet[1024]: E1212 23:33:28.147705    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:33:40 old-k8s-version-549640 kubelet[1024]: E1212 23:33:40.147414    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:33:53 old-k8s-version-549640 kubelet[1024]: E1212 23:33:53.147935    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:34:06 old-k8s-version-549640 kubelet[1024]: E1212 23:34:06.146979    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:34:19 old-k8s-version-549640 kubelet[1024]: E1212 23:34:19.147197    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:34:30 old-k8s-version-549640 kubelet[1024]: E1212 23:34:30.147948    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:34:42 old-k8s-version-549640 kubelet[1024]: E1212 23:34:42.147046    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:34:54 old-k8s-version-549640 kubelet[1024]: E1212 23:34:54.147516    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:35:09 old-k8s-version-549640 kubelet[1024]: E1212 23:35:09.147171    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:35:24 old-k8s-version-549640 kubelet[1024]: E1212 23:35:24.147844    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:35:38 old-k8s-version-549640 kubelet[1024]: E1212 23:35:38.147302    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 12 23:35:49 old-k8s-version-549640 kubelet[1024]: E1212 23:35:49.148213    1024 pod_workers.go:191] Error syncing pod 0cb2ae7e-8232-4802-8552-0088be4ae16b ("metrics-server-74d5856cc6-hsjtz_kube-system(0cb2ae7e-8232-4802-8552-0088be4ae16b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [0372df844f76c9ff478409c3905a1f4d41e8f24c282f87454bb20dfc8c944015] <==
	* I1212 23:17:44.585228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:17:44.599790       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:17:44.600063       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:18:02.007530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:18:02.008133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898!
	I1212 23:18:02.008795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f830995-bb53-44bd-84b0-2e2877ca6bf5", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898 became leader
	I1212 23:18:02.109469       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_661390ac-3cb6-4b5c-8b0b-831df338c898!
	
	* 
	* ==> storage-provisioner [f0f94bb587a894cd279b719e60b2418dd69d990e60e6cc07befd12791eec6e4e] <==
	* I1212 23:08:11.450819       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:08:11.464864       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:08:11.466359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:08:11.478012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:08:11.480415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8!
	I1212 23:08:11.478350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f830995-bb53-44bd-84b0-2e2877ca6bf5", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8 became leader
	I1212 23:08:11.581901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-549640_16de55e8-696f-4ceb-877e-452df6ce63d8!
	I1212 23:17:13.817767       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:17:43.826623       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-549640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-hsjtz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz: exit status 1 (81.259987ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-hsjtz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-549640 describe pod metrics-server-74d5856cc6-hsjtz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.43s)
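Note: the metrics-server ImagePullBackOff repeated throughout the kubelet log above is produced by the test setup itself, which re-points the addon at the unresolvable registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table further down); the failure recorded here is that the expected addon pod(s) never became Ready before the wait deadline. A rough manual equivalent of what the harness polls for is sketched below; these commands are illustrative only, are not part of the captured output, and the metrics-server label selector is an assumption rather than something shown in the logs.

	kubectl --context old-k8s-version-549640 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-549640 -n kube-system get pods -l k8s-app=metrics-server -o wide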

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:31:39.568597   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:31:53.067384   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:32:09.617030   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:38:43.428197561 +0000 UTC m=+5763.239255023
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.104µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-850839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
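For reference, the expectation checked at start_stop_delete_test.go:297 is that the dashboard-metrics-scraper deployment carries the overridden image registry.k8s.io/echoserver:1.4; because the 9m0s context deadline was already exhausted, the describe call above failed after ~2µs with context deadline exceeded and no deployment info was collected. A manual way to inspect the deployment image outside the harness would look roughly like the following (illustrative command, not part of the captured output):

	kubectl --context default-k8s-diff-port-850839 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'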
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-850839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-850839 logs -n 25: (1.238420899s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-549640 image                           | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| start   | -p newest-cni-439645 --memory=2200 --alsologtostderr   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-439645             | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-439645                                   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	| delete  | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:36:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:36:08.204541  133802 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:36:08.204725  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204739  133802 out.go:309] Setting ErrFile to fd 2...
	I1212 23:36:08.204747  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204988  133802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:36:08.205710  133802 out.go:303] Setting JSON to false
	I1212 23:36:08.206770  133802 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15522,"bootTime":1702408646,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:36:08.206855  133802 start.go:138] virtualization: kvm guest
	I1212 23:36:08.209439  133802 out.go:177] * [newest-cni-439645] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:36:08.211502  133802 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:36:08.211521  133802 notify.go:220] Checking for updates...
	I1212 23:36:08.213376  133802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:36:08.215409  133802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:36:08.216961  133802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.218748  133802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:36:08.220434  133802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:36:08.222483  133802 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222602  133802 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222741  133802 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:08.222896  133802 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:36:08.264663  133802 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:36:08.266329  133802 start.go:298] selected driver: kvm2
	I1212 23:36:08.266348  133802 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:36:08.266361  133802 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:36:08.267078  133802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.267184  133802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:36:08.283689  133802 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:36:08.283747  133802 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1212 23:36:08.283771  133802 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 23:36:08.284034  133802 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 23:36:08.284128  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:08.284147  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:08.284162  133802 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:36:08.284179  133802 start_flags.go:323] config:
	{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:08.284346  133802 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.287125  133802 out.go:177] * Starting control plane node newest-cni-439645 in cluster newest-cni-439645
	I1212 23:36:08.288784  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:08.288841  133802 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 23:36:08.288851  133802 cache.go:56] Caching tarball of preloaded images
	I1212 23:36:08.288949  133802 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:36:08.288960  133802 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 23:36:08.289063  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:08.289080  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json: {Name:mk0bbd2ffb05d360736a6f4129d836fbd45c7eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:08.289214  133802 start.go:365] acquiring machines lock for newest-cni-439645: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:36:08.289242  133802 start.go:369] acquired machines lock for "newest-cni-439645" in 15.176µs
	I1212 23:36:08.289256  133802 start.go:93] Provisioning new machine with config: &{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:36:08.289315  133802 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 23:36:08.291357  133802 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:36:08.291571  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:36:08.291628  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:36:08.306775  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I1212 23:36:08.307294  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:36:08.307903  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:36:08.307928  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:36:08.308314  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:36:08.308532  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:08.308701  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:08.308899  133802 start.go:159] libmachine.API.Create for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:08.308956  133802 client.go:168] LocalClient.Create starting
	I1212 23:36:08.308999  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 23:36:08.309054  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309078  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309151  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 23:36:08.309179  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309202  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309228  133802 main.go:141] libmachine: Running pre-create checks...
	I1212 23:36:08.309247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .PreCreateCheck
	I1212 23:36:08.309626  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:08.310057  133802 main.go:141] libmachine: Creating machine...
	I1212 23:36:08.310077  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Create
	I1212 23:36:08.310248  133802 main.go:141] libmachine: (newest-cni-439645) Creating KVM machine...
	I1212 23:36:08.311800  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found existing default KVM network
	I1212 23:36:08.313218  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.313045  133824 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:15:cd} reservation:<nil>}
	I1212 23:36:08.314157  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.314049  133824 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:8d:63} reservation:<nil>}
	I1212 23:36:08.315202  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.315115  133824 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f0b0}
	I1212 23:36:08.321629  133802 main.go:141] libmachine: (newest-cni-439645) DBG | trying to create private KVM network mk-newest-cni-439645 192.168.61.0/24...
	I1212 23:36:08.412322  133802 main.go:141] libmachine: (newest-cni-439645) DBG | private KVM network mk-newest-cni-439645 192.168.61.0/24 created
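
[Editor's note] The subnet probing above (192.168.39.0/24 and 192.168.50.0/24 are taken, so 192.168.61.0/24 is picked) can be sketched with the Go standard library alone. This is an illustration under assumed candidate octets and a simple gateway check, not minikube's actual network.go implementation.

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether the subnet's gateway address already falls
    // inside an address range assigned to a local interface (e.g. virbr1).
    func subnetTaken(gateway net.IP) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false
    	}
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && ipnet.Contains(gateway) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Candidate third octets, loosely matching the subnets seen in this log.
    	for _, octet := range []int{39, 50, 61, 72, 83} {
    		gw := net.IPv4(192, 168, byte(octet), 1)
    		if subnetTaken(gw) {
    			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
    			continue
    		}
    		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
    		return
    	}
    	fmt.Println("no free private subnet found")
    }
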
	I1212 23:36:08.412362  133802 main.go:141] libmachine: (newest-cni-439645) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.412377  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.412288  133824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.412449  133802 main.go:141] libmachine: (newest-cni-439645) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 23:36:08.412478  133802 main.go:141] libmachine: (newest-cni-439645) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:36:08.659216  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.659046  133824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa...
	I1212 23:36:08.751801  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751633  133824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk...
	I1212 23:36:08.751839  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing magic tar header
	I1212 23:36:08.751858  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing SSH key tar header
	I1212 23:36:08.751867  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751794  133824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.751953  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645
	I1212 23:36:08.751980  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 23:36:08.751993  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.752011  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 (perms=drwx------)
	I1212 23:36:08.752026  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 23:36:08.752042  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:36:08.752056  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:36:08.752072  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:36:08.752091  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home
	I1212 23:36:08.752100  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 23:36:08.752106  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Skipping /home - not owner
	I1212 23:36:08.752122  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 23:36:08.752135  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:36:08.752151  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:36:08.752174  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:08.753454  133802 main.go:141] libmachine: (newest-cni-439645) define libvirt domain using xml: 
	I1212 23:36:08.753486  133802 main.go:141] libmachine: (newest-cni-439645) <domain type='kvm'>
	I1212 23:36:08.753499  133802 main.go:141] libmachine: (newest-cni-439645)   <name>newest-cni-439645</name>
	I1212 23:36:08.753515  133802 main.go:141] libmachine: (newest-cni-439645)   <memory unit='MiB'>2200</memory>
	I1212 23:36:08.753526  133802 main.go:141] libmachine: (newest-cni-439645)   <vcpu>2</vcpu>
	I1212 23:36:08.753537  133802 main.go:141] libmachine: (newest-cni-439645)   <features>
	I1212 23:36:08.753546  133802 main.go:141] libmachine: (newest-cni-439645)     <acpi/>
	I1212 23:36:08.753562  133802 main.go:141] libmachine: (newest-cni-439645)     <apic/>
	I1212 23:36:08.753575  133802 main.go:141] libmachine: (newest-cni-439645)     <pae/>
	I1212 23:36:08.753585  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.753591  133802 main.go:141] libmachine: (newest-cni-439645)   </features>
	I1212 23:36:08.753599  133802 main.go:141] libmachine: (newest-cni-439645)   <cpu mode='host-passthrough'>
	I1212 23:36:08.753605  133802 main.go:141] libmachine: (newest-cni-439645)   
	I1212 23:36:08.753616  133802 main.go:141] libmachine: (newest-cni-439645)   </cpu>
	I1212 23:36:08.753624  133802 main.go:141] libmachine: (newest-cni-439645)   <os>
	I1212 23:36:08.753632  133802 main.go:141] libmachine: (newest-cni-439645)     <type>hvm</type>
	I1212 23:36:08.753645  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='cdrom'/>
	I1212 23:36:08.753656  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='hd'/>
	I1212 23:36:08.753684  133802 main.go:141] libmachine: (newest-cni-439645)     <bootmenu enable='no'/>
	I1212 23:36:08.753714  133802 main.go:141] libmachine: (newest-cni-439645)   </os>
	I1212 23:36:08.753743  133802 main.go:141] libmachine: (newest-cni-439645)   <devices>
	I1212 23:36:08.753761  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='cdrom'>
	I1212 23:36:08.753777  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/boot2docker.iso'/>
	I1212 23:36:08.753786  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hdc' bus='scsi'/>
	I1212 23:36:08.753793  133802 main.go:141] libmachine: (newest-cni-439645)       <readonly/>
	I1212 23:36:08.753804  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753813  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='disk'>
	I1212 23:36:08.753820  133802 main.go:141] libmachine: (newest-cni-439645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:36:08.753831  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk'/>
	I1212 23:36:08.753837  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hda' bus='virtio'/>
	I1212 23:36:08.753845  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753853  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753868  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='mk-newest-cni-439645'/>
	I1212 23:36:08.753876  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753882  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753890  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753896  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='default'/>
	I1212 23:36:08.753904  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753910  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753920  133802 main.go:141] libmachine: (newest-cni-439645)     <serial type='pty'>
	I1212 23:36:08.753927  133802 main.go:141] libmachine: (newest-cni-439645)       <target port='0'/>
	I1212 23:36:08.753932  133802 main.go:141] libmachine: (newest-cni-439645)     </serial>
	I1212 23:36:08.753977  133802 main.go:141] libmachine: (newest-cni-439645)     <console type='pty'>
	I1212 23:36:08.754007  133802 main.go:141] libmachine: (newest-cni-439645)       <target type='serial' port='0'/>
	I1212 23:36:08.754026  133802 main.go:141] libmachine: (newest-cni-439645)     </console>
	I1212 23:36:08.754043  133802 main.go:141] libmachine: (newest-cni-439645)     <rng model='virtio'>
	I1212 23:36:08.754070  133802 main.go:141] libmachine: (newest-cni-439645)       <backend model='random'>/dev/random</backend>
	I1212 23:36:08.754082  133802 main.go:141] libmachine: (newest-cni-439645)     </rng>
	I1212 23:36:08.754095  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754112  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754132  133802 main.go:141] libmachine: (newest-cni-439645)   </devices>
	I1212 23:36:08.754150  133802 main.go:141] libmachine: (newest-cni-439645) </domain>
	I1212 23:36:08.754167  133802 main.go:141] libmachine: (newest-cni-439645) 
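
[Editor's note] The <domain> XML logged above is rendered from a template by the kvm2 driver. A minimal stand-in using Go's text/template is sketched below; the field names (Name, MemoryMiB, ...) and the trimmed-down device list are assumptions for illustration, not minikube's real config struct.

    package main

    import (
    	"os"
    	"text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
    	t := template.Must(template.New("domain").Parse(domainTmpl))
    	// Values taken from the log above; DiskPath is abbreviated here.
    	_ = t.Execute(os.Stdout, map[string]any{
    		"Name":      "newest-cni-439645",
    		"MemoryMiB": 2200,
    		"CPUs":      2,
    		"DiskPath":  "newest-cni-439645.rawdisk",
    		"Network":   "mk-newest-cni-439645",
    	})
    }
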
	I1212 23:36:08.759409  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:46:23:5d in network default
	I1212 23:36:08.760150  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring networks are active...
	I1212 23:36:08.760186  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:08.760936  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network default is active
	I1212 23:36:08.761269  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network mk-newest-cni-439645 is active
	I1212 23:36:08.761910  133802 main.go:141] libmachine: (newest-cni-439645) Getting domain xml...
	I1212 23:36:08.762809  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:10.109571  133802 main.go:141] libmachine: (newest-cni-439645) Waiting to get IP...
	I1212 23:36:10.110345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.110871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.110904  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.110828  133824 retry.go:31] will retry after 212.086514ms: waiting for machine to come up
	I1212 23:36:10.325657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.326256  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.326288  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.326191  133824 retry.go:31] will retry after 381.394576ms: waiting for machine to come up
	I1212 23:36:10.708787  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.709308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.709338  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.709263  133824 retry.go:31] will retry after 454.077778ms: waiting for machine to come up
	I1212 23:36:11.164751  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.165360  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.165396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.165317  133824 retry.go:31] will retry after 398.894065ms: waiting for machine to come up
	I1212 23:36:11.565921  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.566445  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.566480  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.566380  133824 retry.go:31] will retry after 617.446132ms: waiting for machine to come up
	I1212 23:36:12.185273  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:12.185806  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:12.185841  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:12.185709  133824 retry.go:31] will retry after 850.635578ms: waiting for machine to come up
	I1212 23:36:13.037840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:13.038356  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:13.038389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:13.038282  133824 retry.go:31] will retry after 1.002335455s: waiting for machine to come up
	I1212 23:36:14.042954  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:14.043504  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:14.043545  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:14.043463  133824 retry.go:31] will retry after 1.341938926s: waiting for machine to come up
	I1212 23:36:15.387072  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:15.387591  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:15.387635  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:15.387529  133824 retry.go:31] will retry after 1.597064845s: waiting for machine to come up
	I1212 23:36:16.986295  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:16.986840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:16.986871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:16.986765  133824 retry.go:31] will retry after 1.571135704s: waiting for machine to come up
	I1212 23:36:18.559590  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:18.560165  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:18.560212  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:18.560084  133824 retry.go:31] will retry after 2.078148594s: waiting for machine to come up
	I1212 23:36:20.641150  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:20.641588  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:20.641620  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:20.641527  133824 retry.go:31] will retry after 3.259272182s: waiting for machine to come up
	I1212 23:36:23.902961  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:23.903396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:23.903419  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:23.903368  133824 retry.go:31] will retry after 4.378786206s: waiting for machine to come up
	I1212 23:36:28.286837  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:28.287251  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:28.287284  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:28.287188  133824 retry.go:31] will retry after 3.993578265s: waiting for machine to come up
	I1212 23:36:32.284308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284709  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has current primary IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284739  133802 main.go:141] libmachine: (newest-cni-439645) Found IP for machine: 192.168.61.126
	I1212 23:36:32.284753  133802 main.go:141] libmachine: (newest-cni-439645) Reserving static IP address...
	I1212 23:36:32.285063  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find host DHCP lease matching {name: "newest-cni-439645", mac: "52:54:00:99:10:d4", ip: "192.168.61.126"} in network mk-newest-cni-439645
	I1212 23:36:32.365787  133802 main.go:141] libmachine: (newest-cni-439645) Reserved static IP address: 192.168.61.126
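
[Editor's note] The repeated "will retry after ..." lines above follow a growing, jittered backoff until the domain reports an IP. A small stand-alone sketch of that pattern follows; it is not minikube's retry package, and the placeholder condition and timeout are assumptions.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls cond with a growing, jittered delay until it succeeds
    // or the timeout expires, mirroring the "will retry after ..." messages.
    func waitFor(cond func() (bool, error), timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ok, err := cond()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	start := time.Now()
    	_ = waitFor(func() (bool, error) {
    		// Placeholder condition: pretend the IP shows up after ~3s.
    		return time.Since(start) > 3*time.Second, nil
    	}, 30*time.Second)
    }
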
	I1212 23:36:32.365863  133802 main.go:141] libmachine: (newest-cni-439645) Waiting for SSH to be available...
	I1212 23:36:32.365878  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Getting to WaitForSSH function...
	I1212 23:36:32.368389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.368856  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368999  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH client type: external
	I1212 23:36:32.369031  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa (-rw-------)
	I1212 23:36:32.369079  133802 main.go:141] libmachine: (newest-cni-439645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:36:32.369095  133802 main.go:141] libmachine: (newest-cni-439645) DBG | About to run SSH command:
	I1212 23:36:32.369110  133802 main.go:141] libmachine: (newest-cni-439645) DBG | exit 0
	I1212 23:36:32.463168  133802 main.go:141] libmachine: (newest-cni-439645) DBG | SSH cmd err, output: <nil>: 
	I1212 23:36:32.463437  133802 main.go:141] libmachine: (newest-cni-439645) KVM machine creation complete!
	I1212 23:36:32.463806  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:32.464520  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464754  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464947  133802 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:36:32.464967  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:36:32.466474  133802 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:36:32.466493  133802 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:36:32.466500  133802 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:36:32.466506  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.469172  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469553  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.469586  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469718  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.469925  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470103  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.470448  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.470816  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.470836  133802 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:36:32.594684  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:32.594730  133802 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:36:32.594745  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.597756  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598098  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.598124  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598250  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.598474  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598645  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598802  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.599050  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.599474  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.599494  133802 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:36:32.724121  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:36:32.724215  133802 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:36:32.724226  133802 main.go:141] libmachine: Provisioning with buildroot...
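
[Editor's note] "Detecting the provisioner" above boils down to reading /etc/os-release on the guest (the `cat /etc/os-release` output a few lines up) and matching the ID field. A small stand-in sketch, not libmachine's actual detector:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    			fmt.Println("detected provisioner id:", id) // e.g. "buildroot"
    			return
    		}
    	}
    	fmt.Println("no ID field found in /etc/os-release")
    }
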
	I1212 23:36:32.724236  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724485  133802 buildroot.go:166] provisioning hostname "newest-cni-439645"
	I1212 23:36:32.724519  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724731  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.727301  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727695  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.727739  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727904  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.728108  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728272  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728398  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.728575  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.728902  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.728919  133802 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-439645 && echo "newest-cni-439645" | sudo tee /etc/hostname
	I1212 23:36:32.869225  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-439645
	
	I1212 23:36:32.869262  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.872305  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872650  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.872683  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872833  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.873037  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873268  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873467  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.873669  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.873997  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.874022  133802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439645/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:36:33.016660  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:33.016698  133802 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:36:33.016737  133802 buildroot.go:174] setting up certificates
	I1212 23:36:33.016752  133802 provision.go:83] configureAuth start
	I1212 23:36:33.016772  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:33.017098  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.020073  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020451  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.020482  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020593  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.022775  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023111  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.023146  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023260  133802 provision.go:138] copyHostCerts
	I1212 23:36:33.023320  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:36:33.023355  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:36:33.023426  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:36:33.023580  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:36:33.023595  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:36:33.023662  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:36:33.023751  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:36:33.023763  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:36:33.023794  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:36:33.023890  133802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439645 san=[192.168.61.126 192.168.61.126 localhost 127.0.0.1 minikube newest-cni-439645]
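
[Editor's note] The server cert generated above carries the SANs listed in san=[...] (the node IP, localhost, and the hostnames). A minimal sketch of producing a certificate with those SANs using crypto/x509 is shown below; it is self-signed with RSA-2048 for brevity, whereas minikube signs against its own CA, so treat it as an illustration only.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-439645"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-439645"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.61.126"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
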
	I1212 23:36:33.130713  133802 provision.go:172] copyRemoteCerts
	I1212 23:36:33.130786  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:36:33.130811  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.133674  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134044  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.134077  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134252  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.134463  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.134630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.134791  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.229111  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:36:33.253806  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:36:33.279194  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:36:33.305549  133802 provision.go:86] duration metric: configureAuth took 288.773724ms
	I1212 23:36:33.305584  133802 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:36:33.305828  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:33.305928  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.309007  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309393  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.309442  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309685  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.309905  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310082  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310269  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.310522  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.310969  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.311001  133802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:36:33.659022  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:36:33.659053  133802 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:36:33.659062  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetURL
	I1212 23:36:33.660336  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using libvirt version 6000000
	I1212 23:36:33.662825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663254  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.663328  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663521  133802 main.go:141] libmachine: Docker is up and running!
	I1212 23:36:33.663541  133802 main.go:141] libmachine: Reticulating splines...
	I1212 23:36:33.663551  133802 client.go:171] LocalClient.Create took 25.354580567s
	I1212 23:36:33.663576  133802 start.go:167] duration metric: libmachine.API.Create for "newest-cni-439645" took 25.354681666s
	I1212 23:36:33.663587  133802 start.go:300] post-start starting for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:33.663598  133802 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:36:33.663621  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.663956  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:36:33.663990  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.666473  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.666820  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.666853  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.667024  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.667278  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.667455  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.667634  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.761451  133802 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:36:33.766167  133802 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:36:33.766204  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:36:33.766276  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:36:33.766346  133802 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:36:33.766431  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:36:33.775657  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:33.801999  133802 start.go:303] post-start completed in 138.398519ms
	I1212 23:36:33.802063  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:33.802819  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.806048  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806506  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.806541  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806879  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:33.807132  133802 start.go:128] duration metric: createHost completed in 25.517805954s
	I1212 23:36:33.807166  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.810015  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810489  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.810523  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810700  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.810949  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811121  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811266  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.811478  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.811830  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.811843  133802 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 23:36:33.940600  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424193.918938184
	
	I1212 23:36:33.940623  133802 fix.go:206] guest clock: 1702424193.918938184
	I1212 23:36:33.940630  133802 fix.go:219] Guest: 2023-12-12 23:36:33.918938184 +0000 UTC Remote: 2023-12-12 23:36:33.807148127 +0000 UTC m=+25.658409212 (delta=111.790057ms)
	I1212 23:36:33.940685  133802 fix.go:190] guest clock delta is within tolerance: 111.790057ms
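
[Editor's note] The guest-clock check above compares the guest's `date +%s.%N` output against the host clock and accepts a small drift. A sketch of that comparison is below; the 2s tolerance is an assumption for illustration, and the epoch value is copied from the log.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guestEpoch := 1702424193.918938184 // guest clock from the log above
    	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
    	local := time.Now()
    	delta := guest.Sub(local)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := 2 * time.Second // assumed tolerance, for illustration
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock drift %v exceeds tolerance %v\n", delta, tolerance)
    	}
    }
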
	I1212 23:36:33.940696  133802 start.go:83] releasing machines lock for "newest-cni-439645", held for 25.651447824s
	I1212 23:36:33.940720  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.941043  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.944022  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.944380  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944480  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945025  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945203  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945298  133802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:36:33.945360  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.945440  133802 ssh_runner.go:195] Run: cat /version.json
	I1212 23:36:33.945462  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.948277  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948626  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948688  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948706  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948786  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.948902  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.949007  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949229  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949244  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949425  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949501  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.949585  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:34.042106  133802 ssh_runner.go:195] Run: systemctl --version
	I1212 23:36:34.067295  133802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:36:34.232321  133802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:36:34.239110  133802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:36:34.239193  133802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:36:34.255820  133802 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:36:34.255846  133802 start.go:475] detecting cgroup driver to use...
	I1212 23:36:34.255922  133802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:36:34.270214  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:36:34.282323  133802 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:36:34.282395  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:36:34.295221  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:36:34.307456  133802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:36:34.424362  133802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:36:34.550588  133802 docker.go:219] disabling docker service ...
	I1212 23:36:34.550666  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:36:34.564243  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:36:34.576734  133802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:36:34.685970  133802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:36:34.806980  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:36:34.822300  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:36:34.841027  133802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:36:34.841096  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.850879  133802 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:36:34.850961  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.860041  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.869086  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
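
[Editor's note] The sed edits above amount to three changes in /etc/crio/crio.conf.d/02-crio.conf: pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod" next to it. A rough Go sketch of the same rewrite on a sample (assumed) config:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative starting config; the real file is /etc/crio/crio.conf.d/02-crio.conf.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Pin the pause image.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// Drop any existing conmon_cgroup line.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	// Switch to cgroupfs and re-add conmon_cgroup = "pod" right after it.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }
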
	I1212 23:36:34.879476  133802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:36:34.889070  133802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:36:34.897768  133802 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:36:34.897820  133802 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:36:34.911555  133802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:36:34.920330  133802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:36:35.030591  133802 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 23:36:35.201172  133802 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:36:35.201259  133802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:36:35.207456  133802 start.go:543] Will wait 60s for crictl version
	I1212 23:36:35.207528  133802 ssh_runner.go:195] Run: which crictl
	I1212 23:36:35.211996  133802 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:36:35.258165  133802 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:36:35.258298  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.308715  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.361717  133802 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:36:35.363160  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:35.365887  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366260  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:35.366297  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366516  133802 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:36:35.370757  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:36:35.385019  133802 localpath.go:92] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.crt -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.crt
	I1212 23:36:35.385199  133802 localpath.go:117] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.key -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:35.387269  133802 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 23:36:35.388849  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:35.388917  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:35.425861  133802 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:36:35.425931  133802 ssh_runner.go:195] Run: which lz4
	I1212 23:36:35.430186  133802 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 23:36:35.434663  133802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:36:35.434700  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1212 23:36:37.061349  133802 crio.go:444] Took 1.631183 seconds to copy over tarball
	I1212 23:36:37.061464  133802 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:36:39.637255  133802 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575759736s)
	I1212 23:36:39.637291  133802 crio.go:451] Took 2.575898 seconds to extract the tarball
	I1212 23:36:39.637303  133802 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:36:39.677494  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:39.766035  133802 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:36:39.766059  133802 cache_images.go:84] Images are preloaded, skipping loading
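
For reference, the preload path shown above is easy to reproduce by hand: copy the version-matched tarball onto the node and unpack it into /var (CRI-O's image store lives under /var/lib/containers), after which crictl reports the images as present. A sketch assuming the same tarball name and an SSH session on the node:

    # Check whether the preload tarball has already been delivered.
    stat -c "%s %y" /preloaded.tar.lz4 || echo "preload missing, copy it over first"
    # Unpack container images and metadata into /var, then drop the tarball.
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    # Confirm the runtime now sees the preloaded images.
    sudo crictl images --output json | head
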
	I1212 23:36:39.766189  133802 ssh_runner.go:195] Run: crio config
	I1212 23:36:39.833582  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:39.833614  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:39.833641  133802 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1212 23:36:39.833669  133802 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-439645 NodeName:newest-cni-439645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:36:39.833860  133802 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-439645"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:36:39.833987  133802 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-439645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
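
The empty `ExecStart=` line in the generated drop-in is deliberate: systemd allows only one ExecStart for a non-oneshot service, so an override file must first clear the inherited command before defining its own. A minimal sketch of installing an equivalent override by hand (paths match the scp targets below; the kubelet flag set here is abbreviated for readability, not minikube's exact file):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    # Write the drop-in: reset ExecStart, then redefine it.
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --container-runtime-endpoint=unix:///var/run/crio/crio.sock
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
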
	I1212 23:36:39.834070  133802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:36:39.844985  133802 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:36:39.845069  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:36:39.856286  133802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1212 23:36:39.874343  133802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:36:39.892075  133802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 23:36:39.912183  133802 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I1212 23:36:39.916201  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:36:39.928092  133802 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645 for IP: 192.168.61.126
	I1212 23:36:39.928126  133802 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:39.928286  133802 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:36:39.928341  133802 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:36:39.928452  133802 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:39.928484  133802 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a
	I1212 23:36:39.928502  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a with IP's: [192.168.61.126 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:36:40.086731  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a ...
	I1212 23:36:40.086762  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a: {Name:mk84eb32c33b5eeb3ae8582be9a9ef465e3ffdf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.086933  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a ...
	I1212 23:36:40.086947  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a: {Name:mk3956332b5f04ef30f2f27bb7fd660cd7454547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.087015  133802 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt
	I1212 23:36:40.087074  133802 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key
	I1212 23:36:40.087122  133802 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key
	I1212 23:36:40.087136  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt with IP's: []
	I1212 23:36:40.232772  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt ...
	I1212 23:36:40.232814  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt: {Name:mk67436a71f51ed921cc97aac7a15bc922b20637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.232974  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key ...
	I1212 23:36:40.232993  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key: {Name:mk50b06404e6bba8454342d2a726ff327c0cec64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
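
The apiserver certificate generated above is signed for the node IP plus the first service-network address and the loopback addresses (192.168.61.126, 10.96.0.1, 127.0.0.1, 10.0.0.1). A quick way to confirm the SANs on the written certificate; the profile path is the one used in the log:

    CRT=/home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt
    # Print the Subject Alternative Name extension of the freshly generated cert.
    openssl x509 -in "$CRT" -noout -text | grep -A1 'Subject Alternative Name'
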
	I1212 23:36:40.233148  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:36:40.233183  133802 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:36:40.233191  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:36:40.233214  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:36:40.233239  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:36:40.233260  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:36:40.233303  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:40.233973  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:36:40.258898  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:36:40.281796  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:36:40.306279  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:36:40.330350  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:36:40.354800  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:36:40.381255  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:36:40.406517  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:36:40.432781  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:36:40.457098  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:36:40.481498  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:36:40.505394  133802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:36:40.521926  133802 ssh_runner.go:195] Run: openssl version
	I1212 23:36:40.527951  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:36:40.539530  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.544979  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.545055  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.551367  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:36:40.562238  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:36:40.573393  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578071  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578118  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.583694  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:36:40.594194  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:36:40.605923  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610736  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610801  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.616424  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
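
The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above follow OpenSSL's hashed-directory lookup convention: a CA is located by the hash of its subject name, so the symlink must be named after that hash. A sketch that derives the link name instead of hard-coding it, using the same file as in the log:

    CA=/usr/share/ca-certificates/minikubeCA.pem
    # openssl x509 -hash prints the subject hash used for c_rehash-style lookups.
    hash=$(openssl x509 -hash -noout -in "$CA")
    sudo ln -fs "$CA" "/etc/ssl/certs/${hash}.0"   # e.g. b5213941.0 in the log above
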
	I1212 23:36:40.627019  133802 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:36:40.631732  133802 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:36:40.631791  133802 kubeadm.go:404] StartCluster: {Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:40.631885  133802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:36:40.631927  133802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:36:40.674482  133802 cri.go:89] found id: ""
	I1212 23:36:40.674562  133802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:36:40.685198  133802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:36:40.697550  133802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:36:40.709344  133802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:36:40.709392  133802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:36:40.826867  133802 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:36:40.826985  133802 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:36:41.099627  133802 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:36:41.099767  133802 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:36:41.099941  133802 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:36:41.379678  133802 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:36:41.472054  133802 out.go:204]   - Generating certificates and keys ...
	I1212 23:36:41.472187  133802 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:36:41.472277  133802 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:36:41.814537  133802 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:36:42.030569  133802 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:36:42.212440  133802 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:36:42.408531  133802 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:36:42.596187  133802 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:36:42.596349  133802 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.835041  133802 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:36:42.835264  133802 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.965385  133802 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:36:43.404498  133802 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:36:43.484084  133802 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:36:43.484419  133802 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:36:43.679972  133802 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:36:43.897206  133802 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:36:44.144262  133802 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:36:44.342356  133802 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:36:44.516812  133802 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:36:44.517253  133802 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:36:44.520476  133802 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:36:44.522075  133802 out.go:204]   - Booting up control plane ...
	I1212 23:36:44.522243  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:36:44.522359  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:36:44.522885  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:36:44.540187  133802 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:36:44.541123  133802 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:36:44.541258  133802 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:36:44.679611  133802 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:36:52.682546  133802 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005372 seconds
	I1212 23:36:52.699907  133802 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:36:52.716862  133802 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:36:53.253165  133802 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:36:53.253380  133802 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-439645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:36:53.769781  133802 kubeadm.go:322] [bootstrap-token] Using token: v2icbq.kf108uw3b7rzt7qu
	I1212 23:36:53.771411  133802 out.go:204]   - Configuring RBAC rules ...
	I1212 23:36:53.771550  133802 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:36:53.785275  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:36:53.797359  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:36:53.803152  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:36:53.809783  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:36:53.822923  133802 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:36:53.839260  133802 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:36:54.108016  133802 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:36:54.206851  133802 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:36:54.207818  133802 kubeadm.go:322] 
	I1212 23:36:54.207927  133802 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:36:54.207951  133802 kubeadm.go:322] 
	I1212 23:36:54.208061  133802 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:36:54.208073  133802 kubeadm.go:322] 
	I1212 23:36:54.208106  133802 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:36:54.208190  133802 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:36:54.208263  133802 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:36:54.208272  133802 kubeadm.go:322] 
	I1212 23:36:54.208379  133802 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:36:54.208391  133802 kubeadm.go:322] 
	I1212 23:36:54.208444  133802 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:36:54.208465  133802 kubeadm.go:322] 
	I1212 23:36:54.208548  133802 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:36:54.208620  133802 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:36:54.208718  133802 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:36:54.208731  133802 kubeadm.go:322] 
	I1212 23:36:54.208872  133802 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:36:54.208985  133802 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:36:54.209004  133802 kubeadm.go:322] 
	I1212 23:36:54.209130  133802 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209285  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:36:54.209329  133802 kubeadm.go:322] 	--control-plane 
	I1212 23:36:54.209345  133802 kubeadm.go:322] 
	I1212 23:36:54.209421  133802 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:36:54.209437  133802 kubeadm.go:322] 
	I1212 23:36:54.209512  133802 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209612  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:36:54.210259  133802 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
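
If the join command printed above is lost, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA with the standard kubeadm recipe. A sketch that points at the CA location minikube copies the certificate to earlier in this log (/var/lib/minikube/certs/ca.crt) rather than the stock /etc/kubernetes/pki path; it assumes an RSA CA key, which is minikube's default:

    # SHA-256 over the DER-encoded public key of the cluster CA, as kubeadm expects.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
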
	I1212 23:36:54.210302  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:54.210325  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:54.212174  133802 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:36:54.213539  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:36:54.245016  133802 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1212 23:36:54.272864  133802 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:36:54.272931  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.272968  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=newest-cni-439645 minikube.k8s.io/updated_at=2023_12_12T23_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.379602  133802 ops.go:34] apiserver oom_adj: -16
	I1212 23:36:54.633228  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.739769  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.351504  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.851143  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.351206  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.851623  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.351305  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.851799  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.351559  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.850971  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.351009  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.851133  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.351899  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.851352  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.351666  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.851015  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.351018  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.850982  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.351233  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.851881  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.351793  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.851071  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.351199  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.851905  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:06.014064  133802 kubeadm.go:1088] duration metric: took 11.741191092s to wait for elevateKubeSystemPrivileges.
	I1212 23:37:06.014111  133802 kubeadm.go:406] StartCluster complete in 25.38232392s
	I1212 23:37:06.014140  133802 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.014240  133802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:37:06.016995  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.017348  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:37:06.017516  133802 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:37:06.017602  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:37:06.017622  133802 addons.go:69] Setting default-storageclass=true in profile "newest-cni-439645"
	I1212 23:37:06.017648  133802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-439645"
	I1212 23:37:06.017602  133802 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-439645"
	I1212 23:37:06.017663  133802 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-439645"
	I1212 23:37:06.017718  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.018178  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018192  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018229  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.018346  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.035695  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I1212 23:37:06.035705  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I1212 23:37:06.036249  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036338  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036854  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.036875  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037009  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.037027  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037420  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037460  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037591  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.038135  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.038185  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.041698  133802 addons.go:231] Setting addon default-storageclass=true in "newest-cni-439645"
	I1212 23:37:06.041755  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.042232  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.042290  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.054882  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I1212 23:37:06.055360  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.055977  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.056004  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.056361  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.056554  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.057908  133802 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-439645" context rescaled to 1 replicas
	I1212 23:37:06.057952  133802 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:37:06.061829  133802 out.go:177] * Verifying Kubernetes components...
	I1212 23:37:06.058630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.063335  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I1212 23:37:06.063863  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:37:06.064075  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.065638  133802 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:37:06.064514  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.067314  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.067432  133802 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.067459  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:37:06.067484  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.067889  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.068492  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.068544  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.071038  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071327  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.071427  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071615  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.071835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.071952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.072334  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.086051  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:37:06.086557  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.087406  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.087422  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.087816  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.088051  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.089796  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.090067  133802 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.090086  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:37:06.090116  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.093365  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093786  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.093828  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.094153  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.094345  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.094511  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.207823  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:37:06.209786  133802 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:37:06.209852  133802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:37:06.224108  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.279819  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.792429  133802 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
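
The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway IP from inside the cluster. The effect can be checked directly against the live ConfigMap; the expected fragment below is reconstructed from the sed expression, not captured output:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected to contain, ahead of the "forward . /etc/resolv.conf" directive:
    #     hosts {
    #        192.168.61.1 host.minikube.internal
    #        fallthrough
    #     }
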
	I1212 23:37:06.792532  133802 api_server.go:72] duration metric: took 734.539895ms to wait for apiserver process to appear ...
	I1212 23:37:06.792561  133802 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:37:06.792582  133802 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I1212 23:37:06.801498  133802 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I1212 23:37:06.813707  133802 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:37:06.813743  133802 api_server.go:131] duration metric: took 21.176357ms to wait for apiserver health ...
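
The healthz probe above goes straight at the secure port. Assuming the cluster's default RBAC is in place (the system:public-info-viewer binding grants anonymous access to /healthz, /livez and /readyz), the same check works from any host that can reach the API server:

    # -k skips CA verification; acceptable for a quick liveness check on a test cluster.
    curl -ks https://192.168.61.126:8443/healthz && echo
    # A per-check breakdown is available on the newer readiness endpoint.
    curl -ks "https://192.168.61.126:8443/readyz?verbose"
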
	I1212 23:37:06.813754  133802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:37:06.842793  133802 system_pods.go:59] 5 kube-system pods found
	I1212 23:37:06.842831  133802 system_pods.go:61] "etcd-newest-cni-439645" [7568458a-44a4-460a-8f19-50a0b12ce47e] Running
	I1212 23:37:06.842837  133802 system_pods.go:61] "kube-apiserver-newest-cni-439645" [be37172f-a2c6-43f0-ba6f-026b57424206] Running
	I1212 23:37:06.842841  133802 system_pods.go:61] "kube-controller-manager-newest-cni-439645" [949056cc-9959-4160-bf82-bc9e3afbd86f] Running
	I1212 23:37:06.842845  133802 system_pods.go:61] "kube-proxy-9jtg7" [3c4c2367-6254-4d81-83f0-054b4d33515b] Pending
	I1212 23:37:06.842849  133802 system_pods.go:61] "kube-scheduler-newest-cni-439645" [64a5920a-0055-457c-8f06-e81450e5d8af] Running
	I1212 23:37:06.842858  133802 system_pods.go:74] duration metric: took 29.095739ms to wait for pod list to return data ...
	I1212 23:37:06.842869  133802 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:37:06.860986  133802 default_sa.go:45] found service account: "default"
	I1212 23:37:06.861034  133802 default_sa.go:55] duration metric: took 18.151161ms for default service account to be created ...
	I1212 23:37:06.861049  133802 kubeadm.go:581] duration metric: took 803.062192ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1212 23:37:06.861071  133802 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:37:06.874325  133802 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:37:06.874362  133802 node_conditions.go:123] node cpu capacity is 2
	I1212 23:37:06.874378  133802 node_conditions.go:105] duration metric: took 13.301256ms to run NodePressure ...
	I1212 23:37:06.874393  133802 start.go:228] waiting for startup goroutines ...
	I1212 23:37:07.110459  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110492  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.110568  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110646  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112440  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112454  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112476  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112487  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112499  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112526  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112541  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112572  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112585  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112597  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.113071  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113084  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113115  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113133  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.113087  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113349  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.164813  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.164835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.165137  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.165171  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.166765  133802 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:37:07.167996  133802 addons.go:502] enable addons completed in 1.150489286s: enabled=[storage-provisioner default-storageclass]
	I1212 23:37:07.168052  133802 start.go:233] waiting for cluster config update ...
	I1212 23:37:07.168068  133802 start.go:242] writing updated cluster config ...
	I1212 23:37:07.168346  133802 ssh_runner.go:195] Run: rm -f paused
	I1212 23:37:07.239467  133802 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:37:07.241846  133802 out.go:177] * Done! kubectl is now configured to use "newest-cni-439645" cluster and "default" namespace by default
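
At this point the kubeconfig's current context is the new profile, and the reported skew (client 1.28 against cluster 1.29) is within kubectl's supported one-minor-version range. A couple of quick follow-up checks against the freshly created context:

    kubectl config current-context                        # expected: newest-cni-439645
    kubectl --context newest-cni-439645 get nodes -o wide
    kubectl --context newest-cni-439645 -n kube-system get pods
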
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:17:15 UTC, ends at Tue 2023-12-12 23:38:44 UTC. --
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.161501174Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&PodSandboxMetadata{Name:busybox,Uid:2a7a232d-7be4-46ec-9442-550e77e1037a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423081429202492,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417692880Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-nrpzf,Uid:bfe81238-05e0-4f68-8a23-d212eb2a24f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:170242
3081409531210,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417664821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4af5de0e3df01e688d2acedbe821d9b9b23e58ab25c65cf3f84b8970dbca2f9,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-zwzrg,Uid:8b0d823e-df34-45eb-807c-17d8a9178bb8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423077526351271,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-zwzrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b0d823e-df34-45eb-807c-17d8a9178bb8,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12
T23:17:53.417691229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&PodSandboxMetadata{Name:kube-proxy-wjrjj,Uid:fa659f1c-88de-406d-8183-bcac6f529efc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423073798908799,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa659f1c-88de-406d-8183-bcac6f529efc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-12T23:17:53.417695971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0570ec42-4a53-4688-ac93-ee10fc58313d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423073790640746,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-12-12T23:17:53.417733616Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-850839,Uid:f7c2b6fd5a437e6949a9892207f94280,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066992830282,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c2b6fd5a437e6949a9892207f94280,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7c2b6fd5a437e6949a9892207f94280,kubernetes.io/config.seen: 2023-12-12T23:17:46.402528616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&PodSandboxMetadata{Name:kube-scheduler-defaul
t-k8s-diff-port-850839,Uid:30862793aa821efa1cb278f711cf3bca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066973419366,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30862793aa821efa1cb278f711cf3bca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30862793aa821efa1cb278f711cf3bca,kubernetes.io/config.seen: 2023-12-12T23:17:46.402529811Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-850839,Uid:13a108b8450f638b4168b3bbc0ad86a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066962687498,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13a108b8450f638b4168b3bbc0ad86a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.180:8444,kubernetes.io/config.hash: 13a108b8450f638b4168b3bbc0ad86a2,kubernetes.io/config.seen: 2023-12-12T23:17:46.402527109Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-850839,Uid:ad5cc487748e024b1cc8f6e9d661904b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1702423066918379985,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.39.180:2379,kubernetes.io/config.hash: ad5cc487748e024b1cc8f6e9d661904b,kubernetes.io/config.seen: 2023-12-12T23:17:46.402521203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=44ca7737-5ff8-4fdf-9cf8-f2d0684a9ed8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.162453822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d4b589da-f44d-4f44-9dfe-9b93d47e2d84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.162561784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d4b589da-f44d-4f44-9dfe-9b93d47e2d84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.162755399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d4b589da-f44d-4f44-9dfe-9b93d47e2d84 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.176559187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eeea949d-b3a9-469e-9c52-3fb14cb3d141 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.176639118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eeea949d-b3a9-469e-9c52-3fb14cb3d141 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.178064688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=984dbe74-eb51-4f31-9efa-a4796d20e457 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.178526340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424324178513356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=984dbe74-eb51-4f31-9efa-a4796d20e457 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.179445145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bebea71e-db44-43e3-bb11-40c254b758e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.179514992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bebea71e-db44-43e3-bb11-40c254b758e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.179700160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bebea71e-db44-43e3-bb11-40c254b758e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.217487357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=13f5b750-fccd-4b0d-8b7e-642b3e7d3068 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.217574032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=13f5b750-fccd-4b0d-8b7e-642b3e7d3068 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.219120855Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=74d6d02f-96f7-4f6a-b549-fb15c278e4b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.219605167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424324219589481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=74d6d02f-96f7-4f6a-b549-fb15c278e4b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.220212841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b04369b-3269-43fc-b7a9-d350260565b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.220345682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b04369b-3269-43fc-b7a9-d350260565b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.220567122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b04369b-3269-43fc-b7a9-d350260565b6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.255128248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e14111c-8a90-407a-8c26-cded009cf35c name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.255230515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e14111c-8a90-407a-8c26-cded009cf35c name=/runtime.v1.RuntimeService/Version
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.260778369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f038f28e-4988-494d-b561-d1b511cef6ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.261198880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424324261185278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f038f28e-4988-494d-b561-d1b511cef6ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.261647527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3b38e065-ab29-49fe-8600-c77c7807b76f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.261730646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3b38e065-ab29-49fe-8600-c77c7807b76f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:38:44 default-k8s-diff-port-850839 crio[721]: time="2023-12-12 23:38:44.261903214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423105690011086,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d73c548272214ddd7ec2018b13818b4d9827389771a5caf012c51f7817ce2e9,PodSandboxId:e647fae6370e9d9203cc19f52fbaa8f659a0c5e63a90747417ca9914c92be817,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1702423083503044628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a7a232d-7be4-46ec-9442-550e77e1037a,},Annotations:map[string]string{io.kubernetes.container.hash: 917beb50,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954,PodSandboxId:fef179b548667de78df02b6d85265cd691177c6bed522d3020c40a99e7f0b5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423082102496841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nrpzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfe81238-05e0-4f68-8a23-d212eb2a24f2,},Annotations:map[string]string{io.kubernetes.container.hash: aba5f9a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088,PodSandboxId:61efdcb8ef86749469adcfa92758832a2ea34f131f9efd71a23a956465aa176f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423074821162009,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjrjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
fa659f1c-88de-406d-8183-bcac6f529efc,},Annotations:map[string]string{io.kubernetes.container.hash: a71fdecb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988,PodSandboxId:234d377ea9583508cd8a782499f7fa85b8cff46556e57748d5b6ec90cb2630e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1702423074801770952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
570ec42-4a53-4688-ac93-ee10fc58313d,},Annotations:map[string]string{io.kubernetes.container.hash: 38e7694a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9,PodSandboxId:03cdf22f2cbcc785a689f1e88e83eb32298f9ce588183ca2e8247a3756023a61,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423068133232023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 30862793aa821efa1cb278f711cf3bca,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9,PodSandboxId:0ebe21b79c6eccb37c936475ee93107fb9d23e140d0f35694cbb9211499db3a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423067986906641,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad5cc487748e024b1cc8f6e9d661904b,},An
notations:map[string]string{io.kubernetes.container.hash: 7d1f0931,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee,PodSandboxId:3d84dcf330299130447468ee195d88d2e17ab17d72d83e23a92edfbfcff1cd36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423067715211491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7c2b6fd5a437e6949a9892207f94280,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b,PodSandboxId:b4f73d9a01ecd9c4cf0d5fb9328cd95361bf641c6cdcbe994f5ec18d2bcc1994,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423067579143370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
3a108b8450f638b4168b3bbc0ad86a2,},Annotations:map[string]string{io.kubernetes.container.hash: c9c10e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3b38e065-ab29-49fe-8600-c77c7807b76f name=/runtime.v1.RuntimeService/ListContainers
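
The repeated Version, ImageFsInfo, ListPodSandbox, and ListContainers requests in the journal above are a CRI client (typically the kubelet) polling CRI-O over its unix socket; the socket path matches the kubeadm.alpha.kubernetes.io/cri-socket annotation shown in the node description below. As a minimal sketch only (not part of minikube or this test harness, and assuming the published google.golang.org/grpc and k8s.io/cri-api modules), the same two RuntimeService calls can be issued directly:

	// Hypothetical sketch: issue the same /runtime.v1.RuntimeService/Version and
	// /runtime.v1.RuntimeService/ListContainers calls seen in the CRI-O debug log,
	// against the crio.sock gRPC endpoint.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Equivalent of the Version requests in the log.
		ver, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Equivalent of ListContainers with an empty filter ("No filters were applied").
		resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

The loop prints roughly the same columns as the "container status" table that follows.
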
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61878856aa70b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   234d377ea9583       storage-provisioner
	2d73c54827221       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   e647fae6370e9       busybox
	79a5e815ba6ab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   fef179b548667       coredns-5dd5756b68-nrpzf
	fb7f07b5f8eb1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   61efdcb8ef867       kube-proxy-wjrjj
	8f486cf9b4b55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   234d377ea9583       storage-provisioner
	d45aa46de2dd0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   03cdf22f2cbcc       kube-scheduler-default-k8s-diff-port-850839
	57f9f49cbae33       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   0ebe21b79c6ec       etcd-default-k8s-diff-port-850839
	901c40ebab259       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   3d84dcf330299       kube-controller-manager-default-k8s-diff-port-850839
	71fd536d9f31c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   b4f73d9a01ecd       kube-apiserver-default-k8s-diff-port-850839
	
	* 
	* ==> coredns [79a5e815ba6abce51f902bd429db04ac56341ded4977a3321e915aa3ba9a6954] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53658 - 15063 "HINFO IN 2628023027677409627.8915963882515406296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009459699s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-850839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-850839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=default-k8s-diff-port-850839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_09_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:09:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850839
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:38:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:33:41 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:33:41 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:33:41 +0000   Tue, 12 Dec 2023 23:09:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:33:41 +0000   Tue, 12 Dec 2023 23:18:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    default-k8s-diff-port-850839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c053d507b8d94034a23e89010a2bb079
	  System UUID:                c053d507-b8d9-4034-a23e-89010a2bb079
	  Boot ID:                    43f38a7a-c052-4bea-9ff4-1379c57765e8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-nrpzf                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-850839                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-850839              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wjrjj                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-850839              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-zwzrg                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-850839 event: Registered Node default-k8s-diff-port-850839 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-850839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-850839 event: Registered Node default-k8s-diff-port-850839 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070886] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.504214] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.220427] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153315] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.628802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.433457] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.120891] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.145475] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.131692] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.232611] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +17.607077] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[Dec12 23:18] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [57f9f49cbae33b406ad5b453ca678be1642066018d030c55734f2aaea55149e9] <==
	* {"level":"info","ts":"2023-12-12T23:17:59.498501Z","caller":"traceutil/trace.go:171","msg":"trace[60084717] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"143.918042ms","start":"2023-12-12T23:17:59.354566Z","end":"2023-12-12T23:17:59.498484Z","steps":["trace[60084717] 'process raft request'  (duration: 122.868384ms)","trace[60084717] 'compare'  (duration: 20.90077ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:17:59.821567Z","caller":"traceutil/trace.go:171","msg":"trace[2071210752] linearizableReadLoop","detail":"{readStateIndex:573; appliedIndex:572; }","duration":"257.087334ms","start":"2023-12-12T23:17:59.564447Z","end":"2023-12-12T23:17:59.821534Z","steps":["trace[2071210752] 'read index received'  (duration: 245.01688ms)","trace[2071210752] 'applied index is now lower than readState.Index'  (duration: 12.069643ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:17:59.821711Z","caller":"traceutil/trace.go:171","msg":"trace[747053069] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"318.69518ms","start":"2023-12-12T23:17:59.502859Z","end":"2023-12-12T23:17:59.821554Z","steps":["trace[747053069] 'process raft request'  (duration: 306.654301ms)","trace[747053069] 'compare'  (duration: 11.918931ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:17:59.821738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.286578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" ","response":"range_response_count:1 size:3866"}
	{"level":"info","ts":"2023-12-12T23:17:59.821811Z","caller":"traceutil/trace.go:171","msg":"trace[1581602737] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg; range_end:; response_count:1; response_revision:544; }","duration":"257.373545ms","start":"2023-12-12T23:17:59.564424Z","end":"2023-12-12T23:17:59.821797Z","steps":["trace[1581602737] 'agreement among raft nodes before linearized reading'  (duration: 257.229736ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:17:59.822063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.395449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-850839\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-12-12T23:17:59.822133Z","caller":"traceutil/trace.go:171","msg":"trace[1936932577] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-850839; range_end:; response_count:1; response_revision:544; }","duration":"243.471074ms","start":"2023-12-12T23:17:59.578656Z","end":"2023-12-12T23:17:59.822127Z","steps":["trace[1936932577] 'agreement among raft nodes before linearized reading'  (duration: 243.373155ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:17:59.822308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.502841Z","time spent":"318.9257ms","remote":"127.0.0.1:53236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c54faf94a0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c54faf94a0\" value_size:690 lease:3934048619837274232 >> failure:<>"}
	{"level":"warn","ts":"2023-12-12T23:18:00.216375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.871052ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157420656692050422 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" mod_revision:455 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" value_size:4000 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-12T23:18:00.2168Z","caller":"traceutil/trace.go:171","msg":"trace[1389807667] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:574; }","duration":"138.587558ms","start":"2023-12-12T23:18:00.078199Z","end":"2023-12-12T23:18:00.216787Z","steps":["trace[1389807667] 'read index received'  (duration: 6.903448ms)","trace[1389807667] 'applied index is now lower than readState.Index'  (duration: 131.683501ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:18:00.21715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.955726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-850839\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-12-12T23:18:00.217223Z","caller":"traceutil/trace.go:171","msg":"trace[1088867811] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-850839; range_end:; response_count:1; response_revision:547; }","duration":"139.036697ms","start":"2023-12-12T23:18:00.078175Z","end":"2023-12-12T23:18:00.217212Z","steps":["trace[1088867811] 'agreement among raft nodes before linearized reading'  (duration: 138.89149ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:18:00.217613Z","caller":"traceutil/trace.go:171","msg":"trace[337745353] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"388.978302ms","start":"2023-12-12T23:17:59.82862Z","end":"2023-12-12T23:18:00.217598Z","steps":["trace[337745353] 'process raft request'  (duration: 256.452084ms)","trace[337745353] 'compare'  (duration: 130.509168ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:18:00.217741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.828608Z","time spent":"389.071691ms","remote":"127.0.0.1:53260","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4066,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" mod_revision:455 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" value_size:4000 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-zwzrg\" > >"}
	{"level":"info","ts":"2023-12-12T23:18:00.217926Z","caller":"traceutil/trace.go:171","msg":"trace[455270054] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"330.942934ms","start":"2023-12-12T23:17:59.886972Z","end":"2023-12-12T23:18:00.217915Z","steps":["trace[455270054] 'process raft request'  (duration: 329.750265ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:18:00.218042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T23:17:59.886949Z","time spent":"331.060478ms","remote":"127.0.0.1:53236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":789,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c562c44e15\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-zwzrg.17a038c562c44e15\" value_size:694 lease:3934048619837274232 >> failure:<>"}
	{"level":"info","ts":"2023-12-12T23:27:51.394939Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":818}
	{"level":"info","ts":"2023-12-12T23:27:51.398542Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":818,"took":"2.533699ms","hash":2145782772}
	{"level":"info","ts":"2023-12-12T23:27:51.398627Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2145782772,"revision":818,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:32:51.402139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1060}
	{"level":"info","ts":"2023-12-12T23:32:51.404963Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1060,"took":"2.33308ms","hash":2916306372}
	{"level":"info","ts":"2023-12-12T23:32:51.405043Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2916306372,"revision":1060,"compact-revision":818}
	{"level":"info","ts":"2023-12-12T23:37:51.411456Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1303}
	{"level":"info","ts":"2023-12-12T23:37:51.413404Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1303,"took":"1.584097ms","hash":2473043143}
	{"level":"info","ts":"2023-12-12T23:37:51.413452Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2473043143,"revision":1303,"compact-revision":1060}
	
	* 
	* ==> kernel <==
	*  23:38:44 up 21 min,  0 users,  load average: 0.02, 0.09, 0.08
	Linux default-k8s-diff-port-850839 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [71fd536d9f31cc2c9ef4c8db19cd8a143e26f96b1e2aa620e16541d1b239446b] <==
	* E1212 23:33:54.209731       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:33:54.211387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:34:52.982366       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 23:35:52.981863       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:35:54.210087       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:54.210239       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:35:54.210390       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:35:54.212402       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:54.212530       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:35:54.212559       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:36:52.981680       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 23:37:52.982908       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:37:53.213963       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:37:53.214111       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:37:53.214689       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:37:54.214659       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:37:54.214808       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:37:54.214842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:37:54.214679       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:37:54.214953       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:37:54.216942       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [901c40ebab259b0347915ed7699b3b1e5e4dfb0c38f0cb367c763f09d5e4bbee] <==
	* I1212 23:33:06.388541       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:33:35.851059       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:33:36.397837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:34:05.856992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:06.405423       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:34:20.508669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="386.18µs"
	I1212 23:34:35.502620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="128.401µs"
	E1212 23:34:35.865034       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:36.413777       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:35:05.870893       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:06.424591       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:35:35.878714       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:36.435782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:05.885461       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:06.445412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:35.892643       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:36.456824       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:37:05.900381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:37:06.466933       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:37:35.908980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:37:36.475524       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:38:05.914173       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:38:06.500001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:38:35.923498       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:38:36.507895       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [fb7f07b5f8eb15fd52cb2a4123264cc467c49cfe5839afe4ebdf2d842be97088] <==
	* I1212 23:17:55.355741       1 server_others.go:69] "Using iptables proxy"
	I1212 23:17:55.371348       1 node.go:141] Successfully retrieved node IP: 192.168.39.180
	I1212 23:17:55.427677       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:17:55.427722       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:17:55.431906       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:17:55.431970       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:17:55.432206       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:17:55.432242       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:17:55.433723       1 config.go:188] "Starting service config controller"
	I1212 23:17:55.433733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:17:55.433747       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:17:55.433750       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:17:55.434324       1 config.go:315] "Starting node config controller"
	I1212 23:17:55.434332       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:17:55.534879       1 shared_informer.go:318] Caches are synced for node config
	I1212 23:17:55.534949       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:17:55.534996       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d45aa46de2dd0df88cce390769e61fc7c5afb99034e8369ec07ee78ba2001db9] <==
	* I1212 23:17:50.457858       1 serving.go:348] Generated self-signed cert in-memory
	W1212 23:17:53.096894       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 23:17:53.097020       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:17:53.097056       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 23:17:53.097086       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 23:17:53.203968       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 23:17:53.204035       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:17:53.209921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 23:17:53.210095       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 23:17:53.210126       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 23:17:53.210146       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 23:17:53.310586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:17:15 UTC, ends at Tue 2023-12-12 23:38:44 UTC. --
	Dec 12 23:35:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:55 default-k8s-diff-port-850839 kubelet[925]: E1212 23:35:55.483510     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:36:06 default-k8s-diff-port-850839 kubelet[925]: E1212 23:36:06.483160     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:36:19 default-k8s-diff-port-850839 kubelet[925]: E1212 23:36:19.482983     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:36:32 default-k8s-diff-port-850839 kubelet[925]: E1212 23:36:32.483767     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:36:46 default-k8s-diff-port-850839 kubelet[925]: E1212 23:36:46.500869     925 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:46 default-k8s-diff-port-850839 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:36:47 default-k8s-diff-port-850839 kubelet[925]: E1212 23:36:47.482361     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:37:02 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:02.483430     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:37:15 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:15.482128     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:37:28 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:28.485147     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:37:42 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:42.484184     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:37:46 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:46.474825     925 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Dec 12 23:37:46 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:46.500693     925 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:37:46 default-k8s-diff-port-850839 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:37:46 default-k8s-diff-port-850839 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:37:46 default-k8s-diff-port-850839 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:53 default-k8s-diff-port-850839 kubelet[925]: E1212 23:37:53.483139     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:38:06 default-k8s-diff-port-850839 kubelet[925]: E1212 23:38:06.483411     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:38:17 default-k8s-diff-port-850839 kubelet[925]: E1212 23:38:17.482674     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:38:29 default-k8s-diff-port-850839 kubelet[925]: E1212 23:38:29.482713     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	Dec 12 23:38:40 default-k8s-diff-port-850839 kubelet[925]: E1212 23:38:40.483137     925 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zwzrg" podUID="8b0d823e-df34-45eb-807c-17d8a9178bb8"
	
	* 
	* ==> storage-provisioner [61878856aa70b1d09a5f8f82d98f9aaa22c9c24dad1599b688c91f345e2fa13c] <==
	* I1212 23:18:25.827142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:18:25.845716       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:18:25.845819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:18:43.254598       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:18:43.255026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47!
	I1212 23:18:43.255180       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac91ee07-d761-40c2-b0b7-efbc653bb61d", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47 became leader
	I1212 23:18:43.355699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850839_6fc1a817-14fc-4a2a-a8fd-e40030fa1c47!
	
	* 
	* ==> storage-provisioner [8f486cf9b4b55f728ce2c6e9c3f8c4c0daba2f4615676ffe747c0ad0525f1988] <==
	* I1212 23:17:55.279918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 23:18:25.284901       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zwzrg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg: exit status 1 (67.358311ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zwzrg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-850839 describe pod metrics-server-57f55c9bc5-zwzrg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (311.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115023 -n no-preload-115023
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:37:23.155500549 +0000 UTC m=+5682.966557990
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-115023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-115023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.897µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-115023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-115023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-115023 logs -n 25: (1.254965742s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:08 UTC | 12 Dec 23 23:09 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-549640 image                           | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| start   | -p newest-cni-439645 --memory=2200 --alsologtostderr   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-439645             | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-439645                                   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:36:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:36:08.204541  133802 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:36:08.204725  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204739  133802 out.go:309] Setting ErrFile to fd 2...
	I1212 23:36:08.204747  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204988  133802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:36:08.205710  133802 out.go:303] Setting JSON to false
	I1212 23:36:08.206770  133802 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15522,"bootTime":1702408646,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:36:08.206855  133802 start.go:138] virtualization: kvm guest
	I1212 23:36:08.209439  133802 out.go:177] * [newest-cni-439645] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:36:08.211502  133802 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:36:08.211521  133802 notify.go:220] Checking for updates...
	I1212 23:36:08.213376  133802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:36:08.215409  133802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:36:08.216961  133802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.218748  133802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:36:08.220434  133802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:36:08.222483  133802 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222602  133802 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222741  133802 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:08.222896  133802 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:36:08.264663  133802 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:36:08.266329  133802 start.go:298] selected driver: kvm2
	I1212 23:36:08.266348  133802 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:36:08.266361  133802 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:36:08.267078  133802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.267184  133802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:36:08.283689  133802 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:36:08.283747  133802 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1212 23:36:08.283771  133802 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 23:36:08.284034  133802 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 23:36:08.284128  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:08.284147  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:08.284162  133802 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:36:08.284179  133802 start_flags.go:323] config:
	{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:08.284346  133802 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.287125  133802 out.go:177] * Starting control plane node newest-cni-439645 in cluster newest-cni-439645
	I1212 23:36:08.288784  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:08.288841  133802 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 23:36:08.288851  133802 cache.go:56] Caching tarball of preloaded images
	I1212 23:36:08.288949  133802 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:36:08.288960  133802 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 23:36:08.289063  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:08.289080  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json: {Name:mk0bbd2ffb05d360736a6f4129d836fbd45c7eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:08.289214  133802 start.go:365] acquiring machines lock for newest-cni-439645: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:36:08.289242  133802 start.go:369] acquired machines lock for "newest-cni-439645" in 15.176µs
	I1212 23:36:08.289256  133802 start.go:93] Provisioning new machine with config: &{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:36:08.289315  133802 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 23:36:08.291357  133802 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:36:08.291571  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:36:08.291628  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:36:08.306775  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I1212 23:36:08.307294  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:36:08.307903  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:36:08.307928  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:36:08.308314  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:36:08.308532  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:08.308701  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:08.308899  133802 start.go:159] libmachine.API.Create for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:08.308956  133802 client.go:168] LocalClient.Create starting
	I1212 23:36:08.308999  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 23:36:08.309054  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309078  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309151  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 23:36:08.309179  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309202  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309228  133802 main.go:141] libmachine: Running pre-create checks...
	I1212 23:36:08.309247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .PreCreateCheck
	I1212 23:36:08.309626  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:08.310057  133802 main.go:141] libmachine: Creating machine...
	I1212 23:36:08.310077  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Create
	I1212 23:36:08.310248  133802 main.go:141] libmachine: (newest-cni-439645) Creating KVM machine...
	I1212 23:36:08.311800  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found existing default KVM network
	I1212 23:36:08.313218  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.313045  133824 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:15:cd} reservation:<nil>}
	I1212 23:36:08.314157  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.314049  133824 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:8d:63} reservation:<nil>}
	I1212 23:36:08.315202  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.315115  133824 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f0b0}
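
The network.go lines above show minikube probing candidate private /24 subnets and settling on the first free one (192.168.61.0/24 here, since 192.168.39.0/24 and 192.168.50.0/24 are already held by other profiles). A rough Go sketch of that selection idea, assuming the set of taken subnets is already known; this is not minikube's actual network.go logic:

    package main

    import (
        "fmt"
        "net"
    )

    // freeSubnet returns the first candidate CIDR that does not overlap any taken CIDR.
    func freeSubnet(candidates, taken []string) (*net.IPNet, error) {
        var takenNets []*net.IPNet
        for _, t := range taken {
            _, n, err := net.ParseCIDR(t)
            if err != nil {
                return nil, err
            }
            takenNets = append(takenNets, n)
        }
        for _, c := range candidates {
            _, n, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            collides := false
            for _, t := range takenNets {
                if t.Contains(n.IP) || n.Contains(t.IP) {
                    collides = true
                    break
                }
            }
            if !collides {
                return n, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        // 192.168.39.0/24 and 192.168.50.0/24 are taken in the log above.
        n, err := freeSubnet(
            []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
            []string{"192.168.39.0/24", "192.168.50.0/24"},
        )
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", n) // 192.168.61.0/24
    }
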
	I1212 23:36:08.321629  133802 main.go:141] libmachine: (newest-cni-439645) DBG | trying to create private KVM network mk-newest-cni-439645 192.168.61.0/24...
	I1212 23:36:08.412322  133802 main.go:141] libmachine: (newest-cni-439645) DBG | private KVM network mk-newest-cni-439645 192.168.61.0/24 created
	I1212 23:36:08.412362  133802 main.go:141] libmachine: (newest-cni-439645) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.412377  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.412288  133824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.412449  133802 main.go:141] libmachine: (newest-cni-439645) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 23:36:08.412478  133802 main.go:141] libmachine: (newest-cni-439645) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:36:08.659216  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.659046  133824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa...
	I1212 23:36:08.751801  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751633  133824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk...
	I1212 23:36:08.751839  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing magic tar header
	I1212 23:36:08.751858  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing SSH key tar header
	I1212 23:36:08.751867  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751794  133824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.751953  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645
	I1212 23:36:08.751980  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 23:36:08.751993  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.752011  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 (perms=drwx------)
	I1212 23:36:08.752026  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 23:36:08.752042  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:36:08.752056  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:36:08.752072  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:36:08.752091  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home
	I1212 23:36:08.752100  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 23:36:08.752106  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Skipping /home - not owner
	I1212 23:36:08.752122  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 23:36:08.752135  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:36:08.752151  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:36:08.752174  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:08.753454  133802 main.go:141] libmachine: (newest-cni-439645) define libvirt domain using xml: 
	I1212 23:36:08.753486  133802 main.go:141] libmachine: (newest-cni-439645) <domain type='kvm'>
	I1212 23:36:08.753499  133802 main.go:141] libmachine: (newest-cni-439645)   <name>newest-cni-439645</name>
	I1212 23:36:08.753515  133802 main.go:141] libmachine: (newest-cni-439645)   <memory unit='MiB'>2200</memory>
	I1212 23:36:08.753526  133802 main.go:141] libmachine: (newest-cni-439645)   <vcpu>2</vcpu>
	I1212 23:36:08.753537  133802 main.go:141] libmachine: (newest-cni-439645)   <features>
	I1212 23:36:08.753546  133802 main.go:141] libmachine: (newest-cni-439645)     <acpi/>
	I1212 23:36:08.753562  133802 main.go:141] libmachine: (newest-cni-439645)     <apic/>
	I1212 23:36:08.753575  133802 main.go:141] libmachine: (newest-cni-439645)     <pae/>
	I1212 23:36:08.753585  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.753591  133802 main.go:141] libmachine: (newest-cni-439645)   </features>
	I1212 23:36:08.753599  133802 main.go:141] libmachine: (newest-cni-439645)   <cpu mode='host-passthrough'>
	I1212 23:36:08.753605  133802 main.go:141] libmachine: (newest-cni-439645)   
	I1212 23:36:08.753616  133802 main.go:141] libmachine: (newest-cni-439645)   </cpu>
	I1212 23:36:08.753624  133802 main.go:141] libmachine: (newest-cni-439645)   <os>
	I1212 23:36:08.753632  133802 main.go:141] libmachine: (newest-cni-439645)     <type>hvm</type>
	I1212 23:36:08.753645  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='cdrom'/>
	I1212 23:36:08.753656  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='hd'/>
	I1212 23:36:08.753684  133802 main.go:141] libmachine: (newest-cni-439645)     <bootmenu enable='no'/>
	I1212 23:36:08.753714  133802 main.go:141] libmachine: (newest-cni-439645)   </os>
	I1212 23:36:08.753743  133802 main.go:141] libmachine: (newest-cni-439645)   <devices>
	I1212 23:36:08.753761  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='cdrom'>
	I1212 23:36:08.753777  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/boot2docker.iso'/>
	I1212 23:36:08.753786  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hdc' bus='scsi'/>
	I1212 23:36:08.753793  133802 main.go:141] libmachine: (newest-cni-439645)       <readonly/>
	I1212 23:36:08.753804  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753813  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='disk'>
	I1212 23:36:08.753820  133802 main.go:141] libmachine: (newest-cni-439645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:36:08.753831  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk'/>
	I1212 23:36:08.753837  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hda' bus='virtio'/>
	I1212 23:36:08.753845  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753853  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753868  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='mk-newest-cni-439645'/>
	I1212 23:36:08.753876  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753882  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753890  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753896  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='default'/>
	I1212 23:36:08.753904  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753910  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753920  133802 main.go:141] libmachine: (newest-cni-439645)     <serial type='pty'>
	I1212 23:36:08.753927  133802 main.go:141] libmachine: (newest-cni-439645)       <target port='0'/>
	I1212 23:36:08.753932  133802 main.go:141] libmachine: (newest-cni-439645)     </serial>
	I1212 23:36:08.753977  133802 main.go:141] libmachine: (newest-cni-439645)     <console type='pty'>
	I1212 23:36:08.754007  133802 main.go:141] libmachine: (newest-cni-439645)       <target type='serial' port='0'/>
	I1212 23:36:08.754026  133802 main.go:141] libmachine: (newest-cni-439645)     </console>
	I1212 23:36:08.754043  133802 main.go:141] libmachine: (newest-cni-439645)     <rng model='virtio'>
	I1212 23:36:08.754070  133802 main.go:141] libmachine: (newest-cni-439645)       <backend model='random'>/dev/random</backend>
	I1212 23:36:08.754082  133802 main.go:141] libmachine: (newest-cni-439645)     </rng>
	I1212 23:36:08.754095  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754112  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754132  133802 main.go:141] libmachine: (newest-cni-439645)   </devices>
	I1212 23:36:08.754150  133802 main.go:141] libmachine: (newest-cni-439645) </domain>
	I1212 23:36:08.754167  133802 main.go:141] libmachine: (newest-cni-439645) 
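
The <domain> definition above is what the kvm2 driver hands to libvirt for this VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (the private mk-newest-cni-439645 network plus the default network). A sketch of rendering a similar definition from a Go text/template; the template and struct fields here are illustrative rather than minikube's actual ones:

    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type vmConfig struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        cfg := vmConfig{
            Name:      "newest-cni-439645",
            MemoryMiB: 2200,
            CPUs:      2,
            DiskPath:  "/path/to/newest-cni-439645.rawdisk", // placeholder path
            Network:   "mk-newest-cni-439645",
        }
        // The rendered XML could then be passed to libvirt's define-domain call.
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
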
	I1212 23:36:08.759409  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:46:23:5d in network default
	I1212 23:36:08.760150  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring networks are active...
	I1212 23:36:08.760186  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:08.760936  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network default is active
	I1212 23:36:08.761269  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network mk-newest-cni-439645 is active
	I1212 23:36:08.761910  133802 main.go:141] libmachine: (newest-cni-439645) Getting domain xml...
	I1212 23:36:08.762809  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:10.109571  133802 main.go:141] libmachine: (newest-cni-439645) Waiting to get IP...
	I1212 23:36:10.110345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.110871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.110904  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.110828  133824 retry.go:31] will retry after 212.086514ms: waiting for machine to come up
	I1212 23:36:10.325657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.326256  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.326288  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.326191  133824 retry.go:31] will retry after 381.394576ms: waiting for machine to come up
	I1212 23:36:10.708787  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.709308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.709338  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.709263  133824 retry.go:31] will retry after 454.077778ms: waiting for machine to come up
	I1212 23:36:11.164751  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.165360  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.165396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.165317  133824 retry.go:31] will retry after 398.894065ms: waiting for machine to come up
	I1212 23:36:11.565921  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.566445  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.566480  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.566380  133824 retry.go:31] will retry after 617.446132ms: waiting for machine to come up
	I1212 23:36:12.185273  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:12.185806  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:12.185841  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:12.185709  133824 retry.go:31] will retry after 850.635578ms: waiting for machine to come up
	I1212 23:36:13.037840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:13.038356  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:13.038389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:13.038282  133824 retry.go:31] will retry after 1.002335455s: waiting for machine to come up
	I1212 23:36:14.042954  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:14.043504  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:14.043545  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:14.043463  133824 retry.go:31] will retry after 1.341938926s: waiting for machine to come up
	I1212 23:36:15.387072  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:15.387591  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:15.387635  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:15.387529  133824 retry.go:31] will retry after 1.597064845s: waiting for machine to come up
	I1212 23:36:16.986295  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:16.986840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:16.986871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:16.986765  133824 retry.go:31] will retry after 1.571135704s: waiting for machine to come up
	I1212 23:36:18.559590  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:18.560165  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:18.560212  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:18.560084  133824 retry.go:31] will retry after 2.078148594s: waiting for machine to come up
	I1212 23:36:20.641150  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:20.641588  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:20.641620  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:20.641527  133824 retry.go:31] will retry after 3.259272182s: waiting for machine to come up
	I1212 23:36:23.902961  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:23.903396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:23.903419  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:23.903368  133824 retry.go:31] will retry after 4.378786206s: waiting for machine to come up
	I1212 23:36:28.286837  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:28.287251  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:28.287284  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:28.287188  133824 retry.go:31] will retry after 3.993578265s: waiting for machine to come up
	I1212 23:36:32.284308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284709  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has current primary IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284739  133802 main.go:141] libmachine: (newest-cni-439645) Found IP for machine: 192.168.61.126
	I1212 23:36:32.284753  133802 main.go:141] libmachine: (newest-cni-439645) Reserving static IP address...
	I1212 23:36:32.285063  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find host DHCP lease matching {name: "newest-cni-439645", mac: "52:54:00:99:10:d4", ip: "192.168.61.126"} in network mk-newest-cni-439645
	I1212 23:36:32.365787  133802 main.go:141] libmachine: (newest-cni-439645) Reserved static IP address: 192.168.61.126
	I1212 23:36:32.365863  133802 main.go:141] libmachine: (newest-cni-439645) Waiting for SSH to be available...
	I1212 23:36:32.365878  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Getting to WaitForSSH function...
	I1212 23:36:32.368389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.368856  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368999  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH client type: external
	I1212 23:36:32.369031  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa (-rw-------)
	I1212 23:36:32.369079  133802 main.go:141] libmachine: (newest-cni-439645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:36:32.369095  133802 main.go:141] libmachine: (newest-cni-439645) DBG | About to run SSH command:
	I1212 23:36:32.369110  133802 main.go:141] libmachine: (newest-cni-439645) DBG | exit 0
	I1212 23:36:32.463168  133802 main.go:141] libmachine: (newest-cni-439645) DBG | SSH cmd err, output: <nil>: 
	I1212 23:36:32.463437  133802 main.go:141] libmachine: (newest-cni-439645) KVM machine creation complete!
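
The repeated "will retry after ..." lines while waiting for the DHCP lease illustrate a poll-with-growing-backoff pattern: check for the IP, sleep an increasing interval, try again until a deadline. A minimal sketch of that pattern (helper names assumed, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it reports success, the deadline passes, or it errors.
    func waitFor(check func() (bool, error), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            fmt.Printf("will retry after %v\n", delay)
            time.Sleep(delay)
            // Grow the delay, but cap it so polling stays reasonably frequent.
            if delay < 5*time.Second {
                delay = delay * 3 / 2
            }
        }
        return errors.New("timed out waiting for condition")
    }

    func main() {
        attempts := 0
        err := waitFor(func() (bool, error) {
            attempts++
            return attempts >= 4, nil // pretend the IP shows up on the 4th poll
        }, 30*time.Second)
        fmt.Println("done:", err)
    }
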
	I1212 23:36:32.463806  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:32.464520  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464754  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464947  133802 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:36:32.464967  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:36:32.466474  133802 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:36:32.466493  133802 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:36:32.466500  133802 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:36:32.466506  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.469172  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469553  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.469586  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469718  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.469925  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470103  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.470448  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.470816  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.470836  133802 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:36:32.594684  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:32.594730  133802 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:36:32.594745  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.597756  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598098  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.598124  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598250  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.598474  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598645  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598802  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.599050  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.599474  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.599494  133802 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:36:32.724121  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:36:32.724215  133802 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:36:32.724226  133802 main.go:141] libmachine: Provisioning with buildroot...
	I1212 23:36:32.724236  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724485  133802 buildroot.go:166] provisioning hostname "newest-cni-439645"
	I1212 23:36:32.724519  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724731  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.727301  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727695  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.727739  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727904  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.728108  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728272  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728398  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.728575  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.728902  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.728919  133802 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-439645 && echo "newest-cni-439645" | sudo tee /etc/hostname
	I1212 23:36:32.869225  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-439645
	
	I1212 23:36:32.869262  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.872305  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872650  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.872683  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872833  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.873037  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873268  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873467  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.873669  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.873997  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.874022  133802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439645/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:36:33.016660  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:33.016698  133802 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:36:33.016737  133802 buildroot.go:174] setting up certificates
	I1212 23:36:33.016752  133802 provision.go:83] configureAuth start
	I1212 23:36:33.016772  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:33.017098  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.020073  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020451  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.020482  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020593  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.022775  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023111  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.023146  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023260  133802 provision.go:138] copyHostCerts
	I1212 23:36:33.023320  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:36:33.023355  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:36:33.023426  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:36:33.023580  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:36:33.023595  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:36:33.023662  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:36:33.023751  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:36:33.023763  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:36:33.023794  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:36:33.023890  133802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439645 san=[192.168.61.126 192.168.61.126 localhost 127.0.0.1 minikube newest-cni-439645]
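
The provision step above generates a server certificate signed by the profile's CA, with the listed IPs and hostnames as subject alternative names. A self-contained Go sketch of issuing such a SAN certificate with crypto/x509; the on-the-fly CA and the validity period are assumptions for illustration (minikube loads its existing ca.pem/ca-key.pem instead):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Hypothetical CA generated on the spot for the example.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-439645"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-439645"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.61.126"), net.ParseIP("127.0.0.1")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert issued, %d DER bytes\n", len(srvDER))
    }
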
	I1212 23:36:33.130713  133802 provision.go:172] copyRemoteCerts
	I1212 23:36:33.130786  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:36:33.130811  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.133674  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134044  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.134077  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134252  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.134463  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.134630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.134791  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.229111  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:36:33.253806  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:36:33.279194  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:36:33.305549  133802 provision.go:86] duration metric: configureAuth took 288.773724ms
	I1212 23:36:33.305584  133802 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:36:33.305828  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:33.305928  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.309007  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309393  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.309442  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309685  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.309905  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310082  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310269  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.310522  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.310969  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.311001  133802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:36:33.659022  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
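
The step above drops a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR as an insecure registry, then restarts the service; the "%!s(MISSING)" in the logged command is most likely just how Go's fmt package renders a %s verb when the command string was passed through a printf-style logger without an argument. A tiny illustrative helper (not minikube's code) that produces the same file contents:

    package main

    import "fmt"

    // crioDropIn returns the contents of /etc/sysconfig/crio.minikube as written above.
    func crioDropIn(serviceCIDR string) string {
        return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    }

    func main() {
        fmt.Print(crioDropIn("10.96.0.0/12"))
        // The file is then written over SSH with sudo tee and CRI-O is restarted.
    }
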
	
	I1212 23:36:33.659053  133802 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:36:33.659062  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetURL
	I1212 23:36:33.660336  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using libvirt version 6000000
	I1212 23:36:33.662825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663254  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.663328  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663521  133802 main.go:141] libmachine: Docker is up and running!
	I1212 23:36:33.663541  133802 main.go:141] libmachine: Reticulating splines...
	I1212 23:36:33.663551  133802 client.go:171] LocalClient.Create took 25.354580567s
	I1212 23:36:33.663576  133802 start.go:167] duration metric: libmachine.API.Create for "newest-cni-439645" took 25.354681666s
	I1212 23:36:33.663587  133802 start.go:300] post-start starting for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:33.663598  133802 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:36:33.663621  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.663956  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:36:33.663990  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.666473  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.666820  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.666853  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.667024  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.667278  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.667455  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.667634  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.761451  133802 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:36:33.766167  133802 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:36:33.766204  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:36:33.766276  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:36:33.766346  133802 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:36:33.766431  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:36:33.775657  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:33.801999  133802 start.go:303] post-start completed in 138.398519ms
	I1212 23:36:33.802063  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:33.802819  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.806048  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806506  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.806541  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806879  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:33.807132  133802 start.go:128] duration metric: createHost completed in 25.517805954s
	I1212 23:36:33.807166  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.810015  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810489  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.810523  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810700  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.810949  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811121  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811266  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.811478  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.811830  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.811843  133802 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:36:33.940600  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424193.918938184
	
	I1212 23:36:33.940623  133802 fix.go:206] guest clock: 1702424193.918938184
	I1212 23:36:33.940630  133802 fix.go:219] Guest: 2023-12-12 23:36:33.918938184 +0000 UTC Remote: 2023-12-12 23:36:33.807148127 +0000 UTC m=+25.658409212 (delta=111.790057ms)
	I1212 23:36:33.940685  133802 fix.go:190] guest clock delta is within tolerance: 111.790057ms
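The "date +%!s(MISSING).%!N(MISSING)" line above is the logger's escaped rendering of a plain "date +%s.%N" run over SSH; the result is compared with the host's clock to produce the delta reported here. A standalone sketch of the same check (SSH_TARGET is a placeholder, not part of this log):

    # Compare the guest's wall clock with the host's, the way fix.go does above.
    guest=$(ssh "$SSH_TARGET" 'date +%s.%N')   # seconds.nanoseconds inside the VM
    host=$(date +%s.%N)                        # seconds.nanoseconds on the host
    echo "$guest $host" | awk '{printf "guest-host delta: %.3f s\n", $1 - $2}'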
	I1212 23:36:33.940696  133802 start.go:83] releasing machines lock for "newest-cni-439645", held for 25.651447824s
	I1212 23:36:33.940720  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.941043  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.944022  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.944380  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944480  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945025  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945203  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945298  133802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:36:33.945360  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.945440  133802 ssh_runner.go:195] Run: cat /version.json
	I1212 23:36:33.945462  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.948277  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948626  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948688  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948706  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948786  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.948902  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.949007  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949229  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949244  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949425  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949501  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.949585  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:34.042106  133802 ssh_runner.go:195] Run: systemctl --version
	I1212 23:36:34.067295  133802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:36:34.232321  133802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:36:34.239110  133802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:36:34.239193  133802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:36:34.255820  133802 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
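The find invocation above is logged with its printf placeholder mangled to %!p(MISSING); a cleaned-up, standalone equivalent of the same disable step reads:

    # Park any pre-existing bridge/podman CNI configs so only minikube's own
    # bridge config (written later as 1-k8s.conflist) is active.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;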
	I1212 23:36:34.255846  133802 start.go:475] detecting cgroup driver to use...
	I1212 23:36:34.255922  133802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:36:34.270214  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:36:34.282323  133802 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:36:34.282395  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:36:34.295221  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:36:34.307456  133802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:36:34.424362  133802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:36:34.550588  133802 docker.go:219] disabling docker service ...
	I1212 23:36:34.550666  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:36:34.564243  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:36:34.576734  133802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:36:34.685970  133802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:36:34.806980  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:36:34.822300  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:36:34.841027  133802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:36:34.841096  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.850879  133802 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:36:34.850961  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.860041  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.869086  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.879476  133802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:36:34.889070  133802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:36:34.897768  133802 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:36:34.897820  133802 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:36:34.911555  133802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:36:34.920330  133802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:36:35.030591  133802 ssh_runner.go:195] Run: sudo systemctl restart crio
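Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager and put conmon into the pod cgroup before the daemon is restarted. A quick way to confirm the drop-in matches what the commands intend (expected values inferred from the commands above, not read back from the VM):

    # The three keys the sed edits above touch in CRI-O's drop-in config.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"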
	I1212 23:36:35.201172  133802 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:36:35.201259  133802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:36:35.207456  133802 start.go:543] Will wait 60s for crictl version
	I1212 23:36:35.207528  133802 ssh_runner.go:195] Run: which crictl
	I1212 23:36:35.211996  133802 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:36:35.258165  133802 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:36:35.258298  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.308715  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.361717  133802 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:36:35.363160  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:35.365887  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366260  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:35.366297  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366516  133802 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:36:35.370757  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:36:35.385019  133802 localpath.go:92] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.crt -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.crt
	I1212 23:36:35.385199  133802 localpath.go:117] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.key -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:35.387269  133802 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 23:36:35.388849  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:35.388917  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:35.425861  133802 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:36:35.425931  133802 ssh_runner.go:195] Run: which lz4
	I1212 23:36:35.430186  133802 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:36:35.434663  133802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:36:35.434700  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1212 23:36:37.061349  133802 crio.go:444] Took 1.631183 seconds to copy over tarball
	I1212 23:36:37.061464  133802 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:36:39.637255  133802 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575759736s)
	I1212 23:36:39.637291  133802 crio.go:451] Took 2.575898 seconds to extract the tarball
	I1212 23:36:39.637303  133802 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:36:39.677494  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:39.766035  133802 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:36:39.766059  133802 cache_images.go:84] Images are preloaded, skipping loading
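The preload path is simple: the per-version tarball from the host cache is copied to /preloaded.tar.lz4 in the guest and unpacked over /var, which is where CRI-O keeps its image store. A sketch of the same sequence done manually (SSH_TARGET and the shortened ~/.minikube path stand in for the paths shown above):

    # Copy the preloaded image tarball into the guest and unpack it over /var.
    scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 \
        "$SSH_TARGET":/preloaded.tar.lz4
    ssh "$SSH_TARGET" 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
    ssh "$SSH_TARGET" 'sudo crictl images'    # the v1.29.0-rc.2 control-plane images should now be listed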
	I1212 23:36:39.766189  133802 ssh_runner.go:195] Run: crio config
	I1212 23:36:39.833582  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:39.833614  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:39.833641  133802 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1212 23:36:39.833669  133802 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-439645 NodeName:newest-cni-439645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:36:39.833860  133802 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-439645"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
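(The three evictionHard values render as "0%!"(MISSING) above because of the logger's format escaping; the underlying config sets them to "0%".) The generated file stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single document. To sanity-check a config like this outside of minikube, kubeadm can validate it or dry-run against it without touching the node; a sketch, assuming the YAML above is saved as kubeadm.yaml:

    # Validate the combined config (kubeadm >= 1.26 provides "config validate"),
    # or do a full dry run of init that renders manifests without applying anything.
    kubeadm config validate --config kubeadm.yaml
    sudo kubeadm init --config kubeadm.yaml --dry-run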
	
	I1212 23:36:39.833987  133802 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-439645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
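The kubelet flags above are installed as a systemd drop-in, presumably the 419-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To confirm systemd actually merged a drop-in like this, something along these lines works (my addition, not from this log):

    # Show the effective kubelet unit, including drop-ins and the final ExecStart flags.
    systemctl cat kubelet.service
    systemctl show kubelet -p DropInPaths -p ExecStart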
	I1212 23:36:39.834070  133802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:36:39.844985  133802 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:36:39.845069  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:36:39.856286  133802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1212 23:36:39.874343  133802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:36:39.892075  133802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 23:36:39.912183  133802 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I1212 23:36:39.916201  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:36:39.928092  133802 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645 for IP: 192.168.61.126
	I1212 23:36:39.928126  133802 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:39.928286  133802 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:36:39.928341  133802 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:36:39.928452  133802 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:39.928484  133802 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a
	I1212 23:36:39.928502  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a with IP's: [192.168.61.126 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:36:40.086731  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a ...
	I1212 23:36:40.086762  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a: {Name:mk84eb32c33b5eeb3ae8582be9a9ef465e3ffdf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.086933  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a ...
	I1212 23:36:40.086947  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a: {Name:mk3956332b5f04ef30f2f27bb7fd660cd7454547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.087015  133802 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt
	I1212 23:36:40.087074  133802 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key
	I1212 23:36:40.087122  133802 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key
	I1212 23:36:40.087136  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt with IP's: []
	I1212 23:36:40.232772  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt ...
	I1212 23:36:40.232814  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt: {Name:mk67436a71f51ed921cc97aac7a15bc922b20637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.232974  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key ...
	I1212 23:36:40.232993  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key: {Name:mk50b06404e6bba8454342d2a726ff327c0cec64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.233148  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:36:40.233183  133802 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:36:40.233191  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:36:40.233214  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:36:40.233239  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:36:40.233260  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:36:40.233303  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:40.233973  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:36:40.258898  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:36:40.281796  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:36:40.306279  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:36:40.330350  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:36:40.354800  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:36:40.381255  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:36:40.406517  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:36:40.432781  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:36:40.457098  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:36:40.481498  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:36:40.505394  133802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:36:40.521926  133802 ssh_runner.go:195] Run: openssl version
	I1212 23:36:40.527951  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:36:40.539530  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.544979  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.545055  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.551367  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:36:40.562238  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:36:40.573393  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578071  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578118  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.583694  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:36:40.594194  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:36:40.605923  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610736  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610801  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.616424  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
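The ls/openssl/ln sequence repeated above is the standard OpenSSL CA-path layout: each trusted PEM gets a symlink named after its subject-name hash so the library can find it. In isolation the pattern is:

    # Derive the "<hash>.0" symlink name OpenSSL expects for a trusted CA.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # b5213941.0 for this CA, per the log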
	I1212 23:36:40.627019  133802 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:36:40.631732  133802 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:36:40.631791  133802 kubeadm.go:404] StartCluster: {Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:40.631885  133802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:36:40.631927  133802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:36:40.674482  133802 cri.go:89] found id: ""
	I1212 23:36:40.674562  133802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:36:40.685198  133802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:36:40.697550  133802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:36:40.709344  133802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:36:40.709392  133802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:36:40.826867  133802 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:36:40.826985  133802 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:36:41.099627  133802 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:36:41.099767  133802 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:36:41.099941  133802 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:36:41.379678  133802 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:36:41.472054  133802 out.go:204]   - Generating certificates and keys ...
	I1212 23:36:41.472187  133802 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:36:41.472277  133802 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:36:41.814537  133802 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:36:42.030569  133802 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:36:42.212440  133802 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:36:42.408531  133802 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:36:42.596187  133802 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:36:42.596349  133802 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.835041  133802 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:36:42.835264  133802 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.965385  133802 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:36:43.404498  133802 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:36:43.484084  133802 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:36:43.484419  133802 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:36:43.679972  133802 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:36:43.897206  133802 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:36:44.144262  133802 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:36:44.342356  133802 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:36:44.516812  133802 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:36:44.517253  133802 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:36:44.520476  133802 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:36:44.522075  133802 out.go:204]   - Booting up control plane ...
	I1212 23:36:44.522243  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:36:44.522359  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:36:44.522885  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:36:44.540187  133802 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:36:44.541123  133802 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:36:44.541258  133802 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:36:44.679611  133802 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:36:52.682546  133802 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005372 seconds
	I1212 23:36:52.699907  133802 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:36:52.716862  133802 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:36:53.253165  133802 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:36:53.253380  133802 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-439645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:36:53.769781  133802 kubeadm.go:322] [bootstrap-token] Using token: v2icbq.kf108uw3b7rzt7qu
	I1212 23:36:53.771411  133802 out.go:204]   - Configuring RBAC rules ...
	I1212 23:36:53.771550  133802 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:36:53.785275  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:36:53.797359  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:36:53.803152  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:36:53.809783  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:36:53.822923  133802 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:36:53.839260  133802 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:36:54.108016  133802 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:36:54.206851  133802 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:36:54.207818  133802 kubeadm.go:322] 
	I1212 23:36:54.207927  133802 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:36:54.207951  133802 kubeadm.go:322] 
	I1212 23:36:54.208061  133802 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:36:54.208073  133802 kubeadm.go:322] 
	I1212 23:36:54.208106  133802 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:36:54.208190  133802 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:36:54.208263  133802 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:36:54.208272  133802 kubeadm.go:322] 
	I1212 23:36:54.208379  133802 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:36:54.208391  133802 kubeadm.go:322] 
	I1212 23:36:54.208444  133802 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:36:54.208465  133802 kubeadm.go:322] 
	I1212 23:36:54.208548  133802 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:36:54.208620  133802 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:36:54.208718  133802 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:36:54.208731  133802 kubeadm.go:322] 
	I1212 23:36:54.208872  133802 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:36:54.208985  133802 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:36:54.209004  133802 kubeadm.go:322] 
	I1212 23:36:54.209130  133802 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209285  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:36:54.209329  133802 kubeadm.go:322] 	--control-plane 
	I1212 23:36:54.209345  133802 kubeadm.go:322] 
	I1212 23:36:54.209421  133802 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:36:54.209437  133802 kubeadm.go:322] 
	I1212 23:36:54.209512  133802 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209612  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:36:54.210259  133802 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
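The only warning kubeadm leaves behind is the disabled kubelet unit, which is of no consequence for a throwaway test VM, but the fix the message asks for is a one-liner:

    # Enable kubelet at boot, as the kubeadm preflight warning suggests.
    sudo systemctl enable --now kubelet.service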
	I1212 23:36:54.210302  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:54.210325  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:54.212174  133802 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:36:54.213539  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:36:54.245016  133802 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
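The 457-byte conflist written here is minikube's generated bridge CNI config. CRI-O, like other libcni consumers, treats the lexically first valid file in /etc/cni/net.d as the default network, which is presumably why it is named 1-k8s.conflist. To see what was actually installed:

    # List and print the CNI configs CRI-O will load, lowest-sorted name first.
    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist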
	I1212 23:36:54.272864  133802 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:36:54.272931  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.272968  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=newest-cni-439645 minikube.k8s.io/updated_at=2023_12_12T23_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.379602  133802 ops.go:34] apiserver oom_adj: -16
	I1212 23:36:54.633228  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.739769  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.351504  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.851143  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.351206  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.851623  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.351305  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.851799  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.351559  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.850971  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.351009  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.851133  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.351899  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.851352  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.351666  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.851015  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.351018  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.850982  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.351233  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.851881  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.351793  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.851071  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.351199  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.851905  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:06.014064  133802 kubeadm.go:1088] duration metric: took 11.741191092s to wait for elevateKubeSystemPrivileges.
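The repeated "kubectl get sa default" calls above are a poll: minikube waits for the default ServiceAccount to appear, a sign that the controller-manager's service-account controller is up, before it finishes the privilege-elevation step. The equivalent shell loop, using the same binary and kubeconfig paths as the log:

    # Block until the "default" ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done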
	I1212 23:37:06.014111  133802 kubeadm.go:406] StartCluster complete in 25.38232392s
	I1212 23:37:06.014140  133802 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.014240  133802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:37:06.016995  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.017348  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:37:06.017516  133802 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:37:06.017602  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:37:06.017622  133802 addons.go:69] Setting default-storageclass=true in profile "newest-cni-439645"
	I1212 23:37:06.017648  133802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-439645"
	I1212 23:37:06.017602  133802 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-439645"
	I1212 23:37:06.017663  133802 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-439645"
	I1212 23:37:06.017718  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.018178  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018192  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018229  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.018346  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.035695  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I1212 23:37:06.035705  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I1212 23:37:06.036249  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036338  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036854  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.036875  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037009  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.037027  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037420  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037460  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037591  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.038135  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.038185  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.041698  133802 addons.go:231] Setting addon default-storageclass=true in "newest-cni-439645"
	I1212 23:37:06.041755  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.042232  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.042290  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.054882  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I1212 23:37:06.055360  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.055977  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.056004  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.056361  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.056554  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.057908  133802 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-439645" context rescaled to 1 replicas
	I1212 23:37:06.057952  133802 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:37:06.061829  133802 out.go:177] * Verifying Kubernetes components...
	I1212 23:37:06.058630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.063335  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I1212 23:37:06.063863  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:37:06.064075  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.065638  133802 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:37:06.064514  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.067314  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.067432  133802 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.067459  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:37:06.067484  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.067889  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.068492  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.068544  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.071038  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071327  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.071427  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071615  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.071835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.071952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.072334  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.086051  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:37:06.086557  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.087406  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.087422  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.087816  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.088051  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.089796  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.090067  133802 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.090086  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:37:06.090116  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.093365  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093786  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.093828  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.094153  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.094345  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.094511  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.207823  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:37:06.209786  133802 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:37:06.209852  133802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:37:06.224108  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.279819  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.792429  133802 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1212 23:37:06.792532  133802 api_server.go:72] duration metric: took 734.539895ms to wait for apiserver process to appear ...
	I1212 23:37:06.792561  133802 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:37:06.792582  133802 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I1212 23:37:06.801498  133802 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I1212 23:37:06.813707  133802 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:37:06.813743  133802 api_server.go:131] duration metric: took 21.176357ms to wait for apiserver health ...
	I1212 23:37:06.813754  133802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:37:06.842793  133802 system_pods.go:59] 5 kube-system pods found
	I1212 23:37:06.842831  133802 system_pods.go:61] "etcd-newest-cni-439645" [7568458a-44a4-460a-8f19-50a0b12ce47e] Running
	I1212 23:37:06.842837  133802 system_pods.go:61] "kube-apiserver-newest-cni-439645" [be37172f-a2c6-43f0-ba6f-026b57424206] Running
	I1212 23:37:06.842841  133802 system_pods.go:61] "kube-controller-manager-newest-cni-439645" [949056cc-9959-4160-bf82-bc9e3afbd86f] Running
	I1212 23:37:06.842845  133802 system_pods.go:61] "kube-proxy-9jtg7" [3c4c2367-6254-4d81-83f0-054b4d33515b] Pending
	I1212 23:37:06.842849  133802 system_pods.go:61] "kube-scheduler-newest-cni-439645" [64a5920a-0055-457c-8f06-e81450e5d8af] Running
	I1212 23:37:06.842858  133802 system_pods.go:74] duration metric: took 29.095739ms to wait for pod list to return data ...
	I1212 23:37:06.842869  133802 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:37:06.860986  133802 default_sa.go:45] found service account: "default"
	I1212 23:37:06.861034  133802 default_sa.go:55] duration metric: took 18.151161ms for default service account to be created ...
	I1212 23:37:06.861049  133802 kubeadm.go:581] duration metric: took 803.062192ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1212 23:37:06.861071  133802 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:37:06.874325  133802 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:37:06.874362  133802 node_conditions.go:123] node cpu capacity is 2
	I1212 23:37:06.874378  133802 node_conditions.go:105] duration metric: took 13.301256ms to run NodePressure ...
	I1212 23:37:06.874393  133802 start.go:228] waiting for startup goroutines ...
	I1212 23:37:07.110459  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110492  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.110568  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110646  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112440  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112454  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112476  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112487  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112499  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112526  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112541  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112572  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112585  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112597  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.113071  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113084  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113115  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113133  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.113087  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113349  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.164813  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.164835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.165137  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.165171  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.166765  133802 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:37:07.167996  133802 addons.go:502] enable addons completed in 1.150489286s: enabled=[storage-provisioner default-storageclass]
	I1212 23:37:07.168052  133802 start.go:233] waiting for cluster config update ...
	I1212 23:37:07.168068  133802 start.go:242] writing updated cluster config ...
	I1212 23:37:07.168346  133802 ssh_runner.go:195] Run: rm -f paused
	I1212 23:37:07.239467  133802 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:37:07.241846  133802 out.go:177] * Done! kubectl is now configured to use "newest-cni-439645" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:16:53 UTC, ends at Tue 2023-12-12 23:37:24 UTC. --
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.917659153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424243917628160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f5551878-02ef-4672-9ea2-a34fcd02c463 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.922471500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d442a53c-950b-4802-b7e9-0d797c1d42f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.922590060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d442a53c-950b-4802-b7e9-0d797c1d42f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.922905636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d442a53c-950b-4802-b7e9-0d797c1d42f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.968667436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=90de05d1-ba83-42ef-98d9-7c39e2ddf404 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.968770349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=90de05d1-ba83-42ef-98d9-7c39e2ddf404 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.970863203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e5da87bc-97b2-4e57-9444-3f321f22759d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.971409253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424243971390407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e5da87bc-97b2-4e57-9444-3f321f22759d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.972006529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82eb5645-2d7f-4b26-9a1b-64888b57714b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.972093931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82eb5645-2d7f-4b26-9a1b-64888b57714b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:23 no-preload-115023 crio[716]: time="2023-12-12 23:37:23.972511208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82eb5645-2d7f-4b26-9a1b-64888b57714b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.022447874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a48d1c08-7b82-41d6-9b63-e78b9217135f name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.022563954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a48d1c08-7b82-41d6-9b63-e78b9217135f name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.024449563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bce72052-27ca-4cd3-ad65-ab553c28abc2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.024840311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424244024824673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=bce72052-27ca-4cd3-ad65-ab553c28abc2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.025857832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d38dc736-eb83-4fc7-8844-ca38ba4bc31d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.025931889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d38dc736-eb83-4fc7-8844-ca38ba4bc31d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.026166855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d38dc736-eb83-4fc7-8844-ca38ba4bc31d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.072092376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=024dbb5a-1922-465b-9a88-199acb846639 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.072262955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=024dbb5a-1922-465b-9a88-199acb846639 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.074041102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5c97b99c-275b-47c4-898f-c857245813e7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.074613944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424244074592177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=5c97b99c-275b-47c4-898f-c857245813e7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.075430215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34e6c803-e69b-424f-ab4b-12735f30c5c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.075495046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34e6c803-e69b-424f-ab4b-12735f30c5c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:24 no-preload-115023 crio[716]: time="2023-12-12 23:37:24.075745692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a,PodSandboxId:22a91e989de5c4022a7b7721bc3ab594fc6b43e2bf96b3b27edc318aea794cc9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1702423388123429548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e1865df-d2a5-4ebe-be00-20aa7a752e65,},Annotations:map[string]string{io.kubernetes.container.hash: 53d728c6,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24,PodSandboxId:0266489be870b25457febff54e2260a4c81168dd35146c8ded24853d0f2533fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1702423387396138674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qs95k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d936172-0411-4163-a62a-25a11d4ac2f4,},Annotations:map[string]string{io.kubernetes.container.hash: a1ebe0fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5,PodSandboxId:c8703f6b4f020b34740b44655217ff262b8d74ab04ae39807b00d2d246486367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1702423387280823674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9wxzk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1b5bb4-619d-48a2-9c81-060018616240,},Annotations:map[string]string{io.kubernetes.container.hash: 79b5fb15,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c,PodSandboxId:13f74c4eeaf43037844e41d1da5bf148c2eab6880ca02a95bfcce8ab6f42421a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1702423363380053432,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
24e7d66089090d7e8a595d9f335e4709,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023,PodSandboxId:d5f3d42af47d3379363e503eb36321c029263c6a6fe40ca5d79f74e2c25397f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1702423362983025990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-115023,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 8ff2739708a59d44f5a39a50cec77f81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61,PodSandboxId:019ea648a9b148bfda6208ff7a823739cf126da6d261222fd54af048fc39360a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1702423362959623548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 7e1cbd99625f6216cc9339126276ebbf,},Annotations:map[string]string{io.kubernetes.container.hash: ca6bf390,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59,PodSandboxId:3bf80270d3dd5520966e67fdf017d2bc63b2f7c2a8716164d1e56c4005450151,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1702423362653094206,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-115023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a45eb8b17b2b296077532d29757644,},A
nnotations:map[string]string{io.kubernetes.container.hash: fe815856,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34e6c803-e69b-424f-ab4b-12735f30c5c9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf741185c5b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   22a91e989de5c       storage-provisioner
	20f1fb49ef910       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   0266489be870b       kube-proxy-qs95k
	590598e80e2c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   c8703f6b4f020       coredns-76f75df574-9wxzk
	7d7d09efdc52f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   13f74c4eeaf43       kube-scheduler-no-preload-115023
	32a84a4009e60       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   d5f3d42af47d3       kube-controller-manager-no-preload-115023
	47857508f38da       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   019ea648a9b14       kube-apiserver-no-preload-115023
	44d52798e1c78       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   3bf80270d3dd5       etcd-no-preload-115023
	
	* 
	* ==> coredns [590598e80e2c8f959717de68e1e8f193396ed68e5a6637df0fc012fe25a28ff5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-115023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-115023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=no-preload-115023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_22_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-115023
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:37:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:33:24 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:33:24 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:33:24 +0000   Tue, 12 Dec 2023 23:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:33:24 +0000   Tue, 12 Dec 2023 23:23:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.32
	  Hostname:    no-preload-115023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 75a7463f23fa499895a4e6f2db6821d6
	  System UUID:                75a7463f-23fa-4998-95a4-e6f2db6821d6
	  Boot ID:                    3fe4d199-2267-4d2a-912b-d0b05050570a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-9wxzk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-115023                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-115023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-115023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-qs95k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-115023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-wlql5              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-115023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-115023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-115023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-115023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-115023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-115023 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node no-preload-115023 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node no-preload-115023 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-115023 event: Registered Node no-preload-115023 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec12 23:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076390] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.486206] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.514053] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154305] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.562486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 23:17] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.139539] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.157836] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126203] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.267238] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +30.563335] systemd-fstab-generator[1324]: Ignoring "noauto" for root device
	[ +20.436510] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 23:22] systemd-fstab-generator[3918]: Ignoring "noauto" for root device
	[  +9.814767] systemd-fstab-generator[4251]: Ignoring "noauto" for root device
	[Dec12 23:23] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [44d52798e1c78a3e3b71710cef363c1357bd3e7d448fa05f44e1340582046b59] <==
	* {"level":"info","ts":"2023-12-12T23:22:45.494676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 received MsgVoteResp from af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.494797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af722703d3b6d364 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.494908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af722703d3b6d364 elected leader af722703d3b6d364 at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:45.499483Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"af722703d3b6d364","local-member-attributes":"{Name:no-preload-115023 ClientURLs:[https://192.168.72.32:2379]}","request-path":"/0/members/af722703d3b6d364/attributes","cluster-id":"69693fe7a610a475","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:22:45.500249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:45.500632Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.50084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:45.507283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:45.507333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:45.508891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.32:2379"}
	{"level":"info","ts":"2023-12-12T23:22:45.511737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:22:45.514506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"69693fe7a610a475","local-member-id":"af722703d3b6d364","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.514702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:45.514781Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-12-12T23:23:05.606357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.973216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-115023\" ","response":"range_response_count:1 size:4649"}
	{"level":"info","ts":"2023-12-12T23:23:05.606468Z","caller":"traceutil/trace.go:171","msg":"trace[960256067] range","detail":"{range_begin:/registry/minions/no-preload-115023; range_end:; response_count:1; response_revision:380; }","duration":"152.289745ms","start":"2023-12-12T23:23:05.454158Z","end":"2023-12-12T23:23:05.606448Z","steps":["trace[960256067] 'range keys from in-memory index tree'  (duration: 123.661838ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:23:05.62993Z","caller":"traceutil/trace.go:171","msg":"trace[2116474601] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"114.986844ms","start":"2023-12-12T23:23:05.514818Z","end":"2023-12-12T23:23:05.629804Z","steps":["trace[2116474601] 'process raft request'  (duration: 32.276629ms)","trace[2116474601] 'compare'  (duration: 23.840773ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:32:45.958115Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2023-12-12T23:32:45.960848Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":716,"took":"2.311017ms","hash":2616507020}
	{"level":"info","ts":"2023-12-12T23:32:45.960941Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2616507020,"revision":716,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:36:39.382533Z","caller":"traceutil/trace.go:171","msg":"trace[474124593] transaction","detail":"{read_only:false; response_revision:1150; number_of_response:1; }","duration":"154.992107ms","start":"2023-12-12T23:36:39.227486Z","end":"2023-12-12T23:36:39.382478Z","steps":["trace[474124593] 'process raft request'  (duration: 154.471717ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T23:36:39.766797Z","caller":"traceutil/trace.go:171","msg":"trace[602377383] linearizableReadLoop","detail":"{readStateIndex:1332; appliedIndex:1331; }","duration":"112.66445ms","start":"2023-12-12T23:36:39.654113Z","end":"2023-12-12T23:36:39.766777Z","steps":["trace[602377383] 'read index received'  (duration: 48.625874ms)","trace[602377383] 'applied index is now lower than readState.Index'  (duration: 64.037315ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T23:36:39.767057Z","caller":"traceutil/trace.go:171","msg":"trace[445563188] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"127.662688ms","start":"2023-12-12T23:36:39.63937Z","end":"2023-12-12T23:36:39.767033Z","steps":["trace[445563188] 'process raft request'  (duration: 63.417331ms)","trace[445563188] 'compare'  (duration: 63.674102ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T23:36:39.767077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.895566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T23:36:39.767453Z","caller":"traceutil/trace.go:171","msg":"trace[1756470496] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1151; }","duration":"113.355583ms","start":"2023-12-12T23:36:39.654082Z","end":"2023-12-12T23:36:39.767438Z","steps":["trace[1756470496] 'agreement among raft nodes before linearized reading'  (duration: 112.800295ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:37:24 up 20 min,  0 users,  load average: 0.84, 0.42, 0.31
	Linux no-preload-115023 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [47857508f38da39fc19368326af5f4ebec9e16a906af04ac798924e5e5a31e61] <==
	* I1212 23:30:48.728570       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:32:47.725791       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:32:47.725926       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1212 23:32:48.726516       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:32:48.726628       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:32:48.726636       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:32:48.726542       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:32:48.726713       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:32:48.727968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:33:48.727411       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:33:48.727779       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:33:48.727813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:33:48.729056       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:33:48.729118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:33:48.729154       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:35:48.728248       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:48.728739       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:35:48.728828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:35:48.729312       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:48.729380       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:35:48.730545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [32a84a4009e605dc7aba286af981cf1a4353c07ea40b51eabdc08dff31f6a023] <==
	* I1212 23:31:34.583376       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:32:04.066587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:32:04.594818       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:32:34.073067       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:32:34.605895       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:33:04.079148       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:33:04.614981       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:33:34.085419       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:33:34.628821       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:34:04.093383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:04.639657       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:34:24.491262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="338.465µs"
	E1212 23:34:34.099485       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:34.649943       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:34:38.489109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="102.803µs"
	E1212 23:35:04.108504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:04.663965       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:35:34.114481       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:34.683675       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:04.122553       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:04.694907       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:34.129094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:34.705727       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:37:04.135809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:37:04.715838       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [20f1fb49ef910382d1327b803afb4611338465781c714e1847202ed6c93a4e24] <==
	* I1212 23:23:08.038444       1 server_others.go:72] "Using iptables proxy"
	I1212 23:23:08.052347       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.32"]
	I1212 23:23:08.126097       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 23:23:08.126285       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:23:08.126305       1 server_others.go:168] "Using iptables Proxier"
	I1212 23:23:08.153110       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:23:08.156733       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I1212 23:23:08.156797       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:23:08.168298       1 config.go:188] "Starting service config controller"
	I1212 23:23:08.168335       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:23:08.168400       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:23:08.168407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:23:08.179397       1 config.go:315] "Starting node config controller"
	I1212 23:23:08.182267       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:23:08.269814       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:23:08.269871       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:23:08.283058       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7d7d09efdc52fc21e489cfed5fbab7d1058bf8b46dc09aa7ae1d45ffff17092c] <==
	* W1212 23:22:47.769659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:47.769674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:22:47.769765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:22:47.769781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:22:47.769844       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:47.769859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:22:47.769910       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:22:47.769919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:22:47.769927       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:47.769934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:48.750761       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:48.750884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:48.753487       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 23:22:48.753541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 23:22:48.833131       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:48.833244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:48.934841       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:48.934904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:48.939560       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:48.939612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 23:22:48.972396       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:22:48.972513       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:22:48.979542       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:48.979663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1212 23:22:51.735010       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:16:53 UTC, ends at Tue 2023-12-12 23:37:24 UTC. --
	Dec 12 23:34:38 no-preload-115023 kubelet[4258]: E1212 23:34:38.470178    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:34:51 no-preload-115023 kubelet[4258]: E1212 23:34:51.668086    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:34:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:34:53 no-preload-115023 kubelet[4258]: E1212 23:34:53.470439    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:35:06 no-preload-115023 kubelet[4258]: E1212 23:35:06.470462    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:35:19 no-preload-115023 kubelet[4258]: E1212 23:35:19.471428    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:35:34 no-preload-115023 kubelet[4258]: E1212 23:35:34.469320    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:35:46 no-preload-115023 kubelet[4258]: E1212 23:35:46.470056    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:35:51 no-preload-115023 kubelet[4258]: E1212 23:35:51.670071    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:35:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:35:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:58 no-preload-115023 kubelet[4258]: E1212 23:35:58.470310    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:36:13 no-preload-115023 kubelet[4258]: E1212 23:36:13.470111    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:36:24 no-preload-115023 kubelet[4258]: E1212 23:36:24.470359    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:36:38 no-preload-115023 kubelet[4258]: E1212 23:36:38.469959    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:36:50 no-preload-115023 kubelet[4258]: E1212 23:36:50.469988    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:36:51 no-preload-115023 kubelet[4258]: E1212 23:36:51.668913    4258 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:51 no-preload-115023 kubelet[4258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:51 no-preload-115023 kubelet[4258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:51 no-preload-115023 kubelet[4258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:01 no-preload-115023 kubelet[4258]: E1212 23:37:01.471962    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	Dec 12 23:37:12 no-preload-115023 kubelet[4258]: E1212 23:37:12.469535    4258 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wlql5" podUID="d9786845-dc0b-4120-be39-2ddde167b817"
	
	* 
	* ==> storage-provisioner [cf741185c5b48e817903a547041970aa1f60d64c3fe46a19542afc3908dccb8a] <==
	* I1212 23:23:08.315071       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:23:08.329948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:23:08.330026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:23:08.341500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:23:08.341683       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7!
	I1212 23:23:08.347066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c3b8f1c-2c81-484e-a7d1-59b57e1a15e9", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7 became leader
	I1212 23:23:08.442889       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-115023_72168d07-f591-43a6-a19b-99faa045a0e7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115023 -n no-preload-115023
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-115023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wlql5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5: exit status 1 (68.358108ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wlql5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-115023 describe pod metrics-server-57f55c9bc5-wlql5: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (311.54s)
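The post-mortem above reduces to two kubectl queries: list the pods whose phase is not Running, then describe the one that never started. A minimal sketch for reproducing it by hand is below; it assumes the no-preload-115023 profile has not yet been deleted (the delete appears in the Audit table further down) and that the metrics-server addon pod carries the usual k8s-app=metrics-server label, which is an assumption here rather than something this log confirms.

# List non-running pods in every namespace (mirrors helpers_test.go:261 above)
kubectl --context no-preload-115023 get po -A \
  --field-selector=status.phase!=Running \
  -o=jsonpath='{.items[*].metadata.name}'

# Describe the stuck pod; with the fake.domain registry override used by this
# test, the Events section should show ImagePullBackOff for
# fake.domain/registry.k8s.io/echoserver:1.4
kubectl --context no-preload-115023 -n kube-system describe pod -l k8s-app=metrics-server

Note that the describe call in the log above was issued without a namespace flag, so it looked for the pod in the default namespace rather than kube-system, which is likely why it reported NotFound even though the pod still appears in the node listing and in the kubelet errors.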

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (327.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 23:33:13.361982   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:34:01.565495   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:34:17.803760   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:35:02.203091   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:35:25.171707   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:35:51.770919   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-809120 -n embed-certs-809120
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-12 23:37:46.90848926 +0000 UTC m=+5706.719546717
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-809120 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-809120 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.271µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-809120 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
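The assertion at start_stop_delete_test.go:297 only checks that the dashboard-metrics-scraper deployment references the overridden registry.k8s.io/echoserver:1.4 image; because the describe call above hit the context deadline, the captured deployment info is empty. A minimal sketch of the same check done directly, assuming the embed-certs-809120 cluster is still reachable:

# Print the container image(s) of the dashboard-metrics-scraper deployment;
# if the --images=MetricsScraper override took effect, the output should
# contain registry.k8s.io/echoserver:1.4
kubectl --context embed-certs-809120 -n kubernetes-dashboard \
  get deploy dashboard-metrics-scraper \
  -o jsonpath='{.spec.template.spec.containers[*].image}'

If the deployment does not exist at all (the dashboard addon enable at 23:11 in the Audit table below has no recorded end time), the command would return NotFound rather than an image list, which would point at the addon never being applied rather than at a wrong image.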
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-809120 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-809120 logs -n 25: (1.274078927s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-809120            | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-549640        | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-115023             | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850839  | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC | 12 Dec 23 23:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:09 UTC |                     |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-809120                 | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-809120                                  | embed-certs-809120           | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-549640             | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:11 UTC | 12 Dec 23 23:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-115023                  | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850839       | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850839 | jenkins | v1.32.0 | 12 Dec 23 23:12 UTC | 12 Dec 23 23:22 UTC |
	|         | default-k8s-diff-port-850839                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-549640 image                           | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| delete  | -p old-k8s-version-549640                              | old-k8s-version-549640       | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:36 UTC |
	| start   | -p newest-cni-439645 --memory=2200 --alsologtostderr   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:36 UTC | 12 Dec 23 23:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-439645             | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-439645                                   | newest-cni-439645            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-115023                                   | no-preload-115023            | jenkins | v1.32.0 | 12 Dec 23 23:37 UTC | 12 Dec 23 23:37 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 23:36:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 23:36:08.204541  133802 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:36:08.204725  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204739  133802 out.go:309] Setting ErrFile to fd 2...
	I1212 23:36:08.204747  133802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:36:08.204988  133802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:36:08.205710  133802 out.go:303] Setting JSON to false
	I1212 23:36:08.206770  133802 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15522,"bootTime":1702408646,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 23:36:08.206855  133802 start.go:138] virtualization: kvm guest
	I1212 23:36:08.209439  133802 out.go:177] * [newest-cni-439645] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 23:36:08.211502  133802 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 23:36:08.211521  133802 notify.go:220] Checking for updates...
	I1212 23:36:08.213376  133802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 23:36:08.215409  133802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:36:08.216961  133802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.218748  133802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 23:36:08.220434  133802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 23:36:08.222483  133802 config.go:182] Loaded profile config "default-k8s-diff-port-850839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222602  133802 config.go:182] Loaded profile config "embed-certs-809120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 23:36:08.222741  133802 config.go:182] Loaded profile config "no-preload-115023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:08.222896  133802 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 23:36:08.264663  133802 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 23:36:08.266329  133802 start.go:298] selected driver: kvm2
	I1212 23:36:08.266348  133802 start.go:902] validating driver "kvm2" against <nil>
	I1212 23:36:08.266361  133802 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 23:36:08.267078  133802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.267184  133802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 23:36:08.283689  133802 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 23:36:08.283747  133802 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1212 23:36:08.283771  133802 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 23:36:08.284034  133802 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 23:36:08.284128  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:08.284147  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:08.284162  133802 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 23:36:08.284179  133802 start_flags.go:323] config:
	{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:08.284346  133802 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 23:36:08.287125  133802 out.go:177] * Starting control plane node newest-cni-439645 in cluster newest-cni-439645
	I1212 23:36:08.288784  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:08.288841  133802 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 23:36:08.288851  133802 cache.go:56] Caching tarball of preloaded images
	I1212 23:36:08.288949  133802 preload.go:174] Found /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 23:36:08.288960  133802 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 23:36:08.289063  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:08.289080  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json: {Name:mk0bbd2ffb05d360736a6f4129d836fbd45c7eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:08.289214  133802 start.go:365] acquiring machines lock for newest-cni-439645: {Name:mk555df2d735c9b28cd3d47b9383af9866178911 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 23:36:08.289242  133802 start.go:369] acquired machines lock for "newest-cni-439645" in 15.176µs
	I1212 23:36:08.289256  133802 start.go:93] Provisioning new machine with config: &{Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:36:08.289315  133802 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 23:36:08.291357  133802 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 23:36:08.291571  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:36:08.291628  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:36:08.306775  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I1212 23:36:08.307294  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:36:08.307903  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:36:08.307928  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:36:08.308314  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:36:08.308532  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:08.308701  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:08.308899  133802 start.go:159] libmachine.API.Create for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:08.308956  133802 client.go:168] LocalClient.Create starting
	I1212 23:36:08.308999  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem
	I1212 23:36:08.309054  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309078  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309151  133802 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem
	I1212 23:36:08.309179  133802 main.go:141] libmachine: Decoding PEM data...
	I1212 23:36:08.309202  133802 main.go:141] libmachine: Parsing certificate...
	I1212 23:36:08.309228  133802 main.go:141] libmachine: Running pre-create checks...
	I1212 23:36:08.309247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .PreCreateCheck
	I1212 23:36:08.309626  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:08.310057  133802 main.go:141] libmachine: Creating machine...
	I1212 23:36:08.310077  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Create
	I1212 23:36:08.310248  133802 main.go:141] libmachine: (newest-cni-439645) Creating KVM machine...
	I1212 23:36:08.311800  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found existing default KVM network
	I1212 23:36:08.313218  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.313045  133824 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:15:cd} reservation:<nil>}
	I1212 23:36:08.314157  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.314049  133824 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2c:8d:63} reservation:<nil>}
	I1212 23:36:08.315202  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.315115  133824 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f0b0}
	I1212 23:36:08.321629  133802 main.go:141] libmachine: (newest-cni-439645) DBG | trying to create private KVM network mk-newest-cni-439645 192.168.61.0/24...
	I1212 23:36:08.412322  133802 main.go:141] libmachine: (newest-cni-439645) DBG | private KVM network mk-newest-cni-439645 192.168.61.0/24 created
	I1212 23:36:08.412362  133802 main.go:141] libmachine: (newest-cni-439645) Setting up store path in /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.412377  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.412288  133824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.412449  133802 main.go:141] libmachine: (newest-cni-439645) Building disk image from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 23:36:08.412478  133802 main.go:141] libmachine: (newest-cni-439645) Downloading /home/jenkins/minikube-integration/17761-76611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso...
	I1212 23:36:08.659216  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.659046  133824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa...
	I1212 23:36:08.751801  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751633  133824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk...
	I1212 23:36:08.751839  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing magic tar header
	I1212 23:36:08.751858  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Writing SSH key tar header
	I1212 23:36:08.751867  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:08.751794  133824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 ...
	I1212 23:36:08.751953  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645
	I1212 23:36:08.751980  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube/machines
	I1212 23:36:08.751993  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 23:36:08.752011  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645 (perms=drwx------)
	I1212 23:36:08.752026  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17761-76611
	I1212 23:36:08.752042  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 23:36:08.752056  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home/jenkins
	I1212 23:36:08.752072  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube/machines (perms=drwxr-xr-x)
	I1212 23:36:08.752091  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Checking permissions on dir: /home
	I1212 23:36:08.752100  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611/.minikube (perms=drwxr-xr-x)
	I1212 23:36:08.752106  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Skipping /home - not owner
	I1212 23:36:08.752122  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration/17761-76611 (perms=drwxrwxr-x)
	I1212 23:36:08.752135  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 23:36:08.752151  133802 main.go:141] libmachine: (newest-cni-439645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 23:36:08.752174  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:08.753454  133802 main.go:141] libmachine: (newest-cni-439645) define libvirt domain using xml: 
	I1212 23:36:08.753486  133802 main.go:141] libmachine: (newest-cni-439645) <domain type='kvm'>
	I1212 23:36:08.753499  133802 main.go:141] libmachine: (newest-cni-439645)   <name>newest-cni-439645</name>
	I1212 23:36:08.753515  133802 main.go:141] libmachine: (newest-cni-439645)   <memory unit='MiB'>2200</memory>
	I1212 23:36:08.753526  133802 main.go:141] libmachine: (newest-cni-439645)   <vcpu>2</vcpu>
	I1212 23:36:08.753537  133802 main.go:141] libmachine: (newest-cni-439645)   <features>
	I1212 23:36:08.753546  133802 main.go:141] libmachine: (newest-cni-439645)     <acpi/>
	I1212 23:36:08.753562  133802 main.go:141] libmachine: (newest-cni-439645)     <apic/>
	I1212 23:36:08.753575  133802 main.go:141] libmachine: (newest-cni-439645)     <pae/>
	I1212 23:36:08.753585  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.753591  133802 main.go:141] libmachine: (newest-cni-439645)   </features>
	I1212 23:36:08.753599  133802 main.go:141] libmachine: (newest-cni-439645)   <cpu mode='host-passthrough'>
	I1212 23:36:08.753605  133802 main.go:141] libmachine: (newest-cni-439645)   
	I1212 23:36:08.753616  133802 main.go:141] libmachine: (newest-cni-439645)   </cpu>
	I1212 23:36:08.753624  133802 main.go:141] libmachine: (newest-cni-439645)   <os>
	I1212 23:36:08.753632  133802 main.go:141] libmachine: (newest-cni-439645)     <type>hvm</type>
	I1212 23:36:08.753645  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='cdrom'/>
	I1212 23:36:08.753656  133802 main.go:141] libmachine: (newest-cni-439645)     <boot dev='hd'/>
	I1212 23:36:08.753684  133802 main.go:141] libmachine: (newest-cni-439645)     <bootmenu enable='no'/>
	I1212 23:36:08.753714  133802 main.go:141] libmachine: (newest-cni-439645)   </os>
	I1212 23:36:08.753743  133802 main.go:141] libmachine: (newest-cni-439645)   <devices>
	I1212 23:36:08.753761  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='cdrom'>
	I1212 23:36:08.753777  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/boot2docker.iso'/>
	I1212 23:36:08.753786  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hdc' bus='scsi'/>
	I1212 23:36:08.753793  133802 main.go:141] libmachine: (newest-cni-439645)       <readonly/>
	I1212 23:36:08.753804  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753813  133802 main.go:141] libmachine: (newest-cni-439645)     <disk type='file' device='disk'>
	I1212 23:36:08.753820  133802 main.go:141] libmachine: (newest-cni-439645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 23:36:08.753831  133802 main.go:141] libmachine: (newest-cni-439645)       <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk'/>
	I1212 23:36:08.753837  133802 main.go:141] libmachine: (newest-cni-439645)       <target dev='hda' bus='virtio'/>
	I1212 23:36:08.753845  133802 main.go:141] libmachine: (newest-cni-439645)     </disk>
	I1212 23:36:08.753853  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753868  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='mk-newest-cni-439645'/>
	I1212 23:36:08.753876  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753882  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753890  133802 main.go:141] libmachine: (newest-cni-439645)     <interface type='network'>
	I1212 23:36:08.753896  133802 main.go:141] libmachine: (newest-cni-439645)       <source network='default'/>
	I1212 23:36:08.753904  133802 main.go:141] libmachine: (newest-cni-439645)       <model type='virtio'/>
	I1212 23:36:08.753910  133802 main.go:141] libmachine: (newest-cni-439645)     </interface>
	I1212 23:36:08.753920  133802 main.go:141] libmachine: (newest-cni-439645)     <serial type='pty'>
	I1212 23:36:08.753927  133802 main.go:141] libmachine: (newest-cni-439645)       <target port='0'/>
	I1212 23:36:08.753932  133802 main.go:141] libmachine: (newest-cni-439645)     </serial>
	I1212 23:36:08.753977  133802 main.go:141] libmachine: (newest-cni-439645)     <console type='pty'>
	I1212 23:36:08.754007  133802 main.go:141] libmachine: (newest-cni-439645)       <target type='serial' port='0'/>
	I1212 23:36:08.754026  133802 main.go:141] libmachine: (newest-cni-439645)     </console>
	I1212 23:36:08.754043  133802 main.go:141] libmachine: (newest-cni-439645)     <rng model='virtio'>
	I1212 23:36:08.754070  133802 main.go:141] libmachine: (newest-cni-439645)       <backend model='random'>/dev/random</backend>
	I1212 23:36:08.754082  133802 main.go:141] libmachine: (newest-cni-439645)     </rng>
	I1212 23:36:08.754095  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754112  133802 main.go:141] libmachine: (newest-cni-439645)     
	I1212 23:36:08.754132  133802 main.go:141] libmachine: (newest-cni-439645)   </devices>
	I1212 23:36:08.754150  133802 main.go:141] libmachine: (newest-cni-439645) </domain>
	I1212 23:36:08.754167  133802 main.go:141] libmachine: (newest-cni-439645) 
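	(For readability, the libvirt domain definition emitted piece by piece in the lines above is reproduced here as a single block. This is a reconstruction assembled from the DBG output only; indentation and blank lines are approximate.)
	
	<domain type='kvm'>
	  <name>newest-cni-439645</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/newest-cni-439645.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-newest-cni-439645'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>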
	I1212 23:36:08.759409  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:46:23:5d in network default
	I1212 23:36:08.760150  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring networks are active...
	I1212 23:36:08.760186  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:08.760936  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network default is active
	I1212 23:36:08.761269  133802 main.go:141] libmachine: (newest-cni-439645) Ensuring network mk-newest-cni-439645 is active
	I1212 23:36:08.761910  133802 main.go:141] libmachine: (newest-cni-439645) Getting domain xml...
	I1212 23:36:08.762809  133802 main.go:141] libmachine: (newest-cni-439645) Creating domain...
	I1212 23:36:10.109571  133802 main.go:141] libmachine: (newest-cni-439645) Waiting to get IP...
	I1212 23:36:10.110345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.110871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.110904  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.110828  133824 retry.go:31] will retry after 212.086514ms: waiting for machine to come up
	I1212 23:36:10.325657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.326256  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.326288  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.326191  133824 retry.go:31] will retry after 381.394576ms: waiting for machine to come up
	I1212 23:36:10.708787  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:10.709308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:10.709338  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:10.709263  133824 retry.go:31] will retry after 454.077778ms: waiting for machine to come up
	I1212 23:36:11.164751  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.165360  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.165396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.165317  133824 retry.go:31] will retry after 398.894065ms: waiting for machine to come up
	I1212 23:36:11.565921  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:11.566445  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:11.566480  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:11.566380  133824 retry.go:31] will retry after 617.446132ms: waiting for machine to come up
	I1212 23:36:12.185273  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:12.185806  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:12.185841  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:12.185709  133824 retry.go:31] will retry after 850.635578ms: waiting for machine to come up
	I1212 23:36:13.037840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:13.038356  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:13.038389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:13.038282  133824 retry.go:31] will retry after 1.002335455s: waiting for machine to come up
	I1212 23:36:14.042954  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:14.043504  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:14.043545  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:14.043463  133824 retry.go:31] will retry after 1.341938926s: waiting for machine to come up
	I1212 23:36:15.387072  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:15.387591  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:15.387635  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:15.387529  133824 retry.go:31] will retry after 1.597064845s: waiting for machine to come up
	I1212 23:36:16.986295  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:16.986840  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:16.986871  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:16.986765  133824 retry.go:31] will retry after 1.571135704s: waiting for machine to come up
	I1212 23:36:18.559590  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:18.560165  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:18.560212  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:18.560084  133824 retry.go:31] will retry after 2.078148594s: waiting for machine to come up
	I1212 23:36:20.641150  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:20.641588  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:20.641620  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:20.641527  133824 retry.go:31] will retry after 3.259272182s: waiting for machine to come up
	I1212 23:36:23.902961  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:23.903396  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:23.903419  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:23.903368  133824 retry.go:31] will retry after 4.378786206s: waiting for machine to come up
	I1212 23:36:28.286837  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:28.287251  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find current IP address of domain newest-cni-439645 in network mk-newest-cni-439645
	I1212 23:36:28.287284  133802 main.go:141] libmachine: (newest-cni-439645) DBG | I1212 23:36:28.287188  133824 retry.go:31] will retry after 3.993578265s: waiting for machine to come up
	I1212 23:36:32.284308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284709  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has current primary IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.284739  133802 main.go:141] libmachine: (newest-cni-439645) Found IP for machine: 192.168.61.126
	I1212 23:36:32.284753  133802 main.go:141] libmachine: (newest-cni-439645) Reserving static IP address...
	I1212 23:36:32.285063  133802 main.go:141] libmachine: (newest-cni-439645) DBG | unable to find host DHCP lease matching {name: "newest-cni-439645", mac: "52:54:00:99:10:d4", ip: "192.168.61.126"} in network mk-newest-cni-439645
	I1212 23:36:32.365787  133802 main.go:141] libmachine: (newest-cni-439645) Reserved static IP address: 192.168.61.126
	I1212 23:36:32.365863  133802 main.go:141] libmachine: (newest-cni-439645) Waiting for SSH to be available...
	I1212 23:36:32.365878  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Getting to WaitForSSH function...
	I1212 23:36:32.368389  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.368856  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.368999  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH client type: external
	I1212 23:36:32.369031  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using SSH private key: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa (-rw-------)
	I1212 23:36:32.369079  133802 main.go:141] libmachine: (newest-cni-439645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 23:36:32.369095  133802 main.go:141] libmachine: (newest-cni-439645) DBG | About to run SSH command:
	I1212 23:36:32.369110  133802 main.go:141] libmachine: (newest-cni-439645) DBG | exit 0
	I1212 23:36:32.463168  133802 main.go:141] libmachine: (newest-cni-439645) DBG | SSH cmd err, output: <nil>: 
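	(For readability, the external SSH invocation described by the argument list above, assembled into one command line. This is a reconstruction from the DBG output; the trailing "exit 0" is the reachability probe named in the "About to run SSH command" line.)
	
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa -p 22 "exit 0"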
	I1212 23:36:32.463437  133802 main.go:141] libmachine: (newest-cni-439645) KVM machine creation complete!
	I1212 23:36:32.463806  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:32.464520  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464754  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:32.464947  133802 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 23:36:32.464967  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:36:32.466474  133802 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 23:36:32.466493  133802 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 23:36:32.466500  133802 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 23:36:32.466506  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.469172  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469553  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.469586  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.469718  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.469925  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470103  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.470247  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.470448  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.470816  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.470836  133802 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 23:36:32.594684  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:32.594730  133802 main.go:141] libmachine: Detecting the provisioner...
	I1212 23:36:32.594745  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.597756  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598098  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.598124  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.598250  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.598474  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598645  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.598802  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.599050  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.599474  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.599494  133802 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 23:36:32.724121  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g161fa11-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 23:36:32.724215  133802 main.go:141] libmachine: found compatible host: buildroot
	I1212 23:36:32.724226  133802 main.go:141] libmachine: Provisioning with buildroot...
	I1212 23:36:32.724236  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724485  133802 buildroot.go:166] provisioning hostname "newest-cni-439645"
	I1212 23:36:32.724519  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:32.724731  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.727301  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727695  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.727739  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.727904  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.728108  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728272  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.728398  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.728575  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.728902  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.728919  133802 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-439645 && echo "newest-cni-439645" | sudo tee /etc/hostname
	I1212 23:36:32.869225  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-439645
	
	I1212 23:36:32.869262  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:32.872305  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872650  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:32.872683  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:32.872833  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:32.873037  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873268  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:32.873467  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:32.873669  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:32.873997  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:32.874022  133802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-439645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-439645/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-439645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 23:36:33.016660  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 23:36:33.016698  133802 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17761-76611/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-76611/.minikube}
	I1212 23:36:33.016737  133802 buildroot.go:174] setting up certificates
	I1212 23:36:33.016752  133802 provision.go:83] configureAuth start
	I1212 23:36:33.016772  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetMachineName
	I1212 23:36:33.017098  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.020073  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020451  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.020482  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.020593  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.022775  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023111  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.023146  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.023260  133802 provision.go:138] copyHostCerts
	I1212 23:36:33.023320  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem, removing ...
	I1212 23:36:33.023355  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem
	I1212 23:36:33.023426  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/ca.pem (1082 bytes)
	I1212 23:36:33.023580  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem, removing ...
	I1212 23:36:33.023595  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem
	I1212 23:36:33.023662  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/cert.pem (1123 bytes)
	I1212 23:36:33.023751  133802 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem, removing ...
	I1212 23:36:33.023763  133802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem
	I1212 23:36:33.023794  133802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-76611/.minikube/key.pem (1675 bytes)
	I1212 23:36:33.023890  133802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-439645 san=[192.168.61.126 192.168.61.126 localhost 127.0.0.1 minikube newest-cni-439645]
	I1212 23:36:33.130713  133802 provision.go:172] copyRemoteCerts
	I1212 23:36:33.130786  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 23:36:33.130811  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.133674  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134044  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.134077  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.134252  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.134463  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.134630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.134791  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.229111  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 23:36:33.253806  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 23:36:33.279194  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 23:36:33.305549  133802 provision.go:86] duration metric: configureAuth took 288.773724ms
	I1212 23:36:33.305584  133802 buildroot.go:189] setting minikube options for container-runtime
	I1212 23:36:33.305828  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:36:33.305928  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.309007  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309393  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.309442  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.309685  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.309905  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310082  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.310269  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.310522  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.310969  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.311001  133802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 23:36:33.659022  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 23:36:33.659053  133802 main.go:141] libmachine: Checking connection to Docker...
	I1212 23:36:33.659062  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetURL
	I1212 23:36:33.660336  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Using libvirt version 6000000
	I1212 23:36:33.662825  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663254  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.663328  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.663521  133802 main.go:141] libmachine: Docker is up and running!
	I1212 23:36:33.663541  133802 main.go:141] libmachine: Reticulating splines...
	I1212 23:36:33.663551  133802 client.go:171] LocalClient.Create took 25.354580567s
	I1212 23:36:33.663576  133802 start.go:167] duration metric: libmachine.API.Create for "newest-cni-439645" took 25.354681666s
	I1212 23:36:33.663587  133802 start.go:300] post-start starting for "newest-cni-439645" (driver="kvm2")
	I1212 23:36:33.663598  133802 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 23:36:33.663621  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.663956  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 23:36:33.663990  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.666473  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.666820  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.666853  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.667024  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.667278  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.667455  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.667634  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.761451  133802 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 23:36:33.766167  133802 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 23:36:33.766204  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/addons for local assets ...
	I1212 23:36:33.766276  133802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-76611/.minikube/files for local assets ...
	I1212 23:36:33.766346  133802 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem -> 838252.pem in /etc/ssl/certs
	I1212 23:36:33.766431  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 23:36:33.775657  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:33.801999  133802 start.go:303] post-start completed in 138.398519ms
	I1212 23:36:33.802063  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetConfigRaw
	I1212 23:36:33.802819  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.806048  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806506  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.806541  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.806879  133802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:36:33.807132  133802 start.go:128] duration metric: createHost completed in 25.517805954s
	I1212 23:36:33.807166  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.810015  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810489  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.810523  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.810700  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.810949  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811121  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.811266  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.811478  133802 main.go:141] libmachine: Using SSH client type: native
	I1212 23:36:33.811830  133802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.61.126 22 <nil> <nil>}
	I1212 23:36:33.811843  133802 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 23:36:33.940600  133802 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702424193.918938184
	
	I1212 23:36:33.940623  133802 fix.go:206] guest clock: 1702424193.918938184
	I1212 23:36:33.940630  133802 fix.go:219] Guest: 2023-12-12 23:36:33.918938184 +0000 UTC Remote: 2023-12-12 23:36:33.807148127 +0000 UTC m=+25.658409212 (delta=111.790057ms)
	I1212 23:36:33.940685  133802 fix.go:190] guest clock delta is within tolerance: 111.790057ms
	I1212 23:36:33.940696  133802 start.go:83] releasing machines lock for "newest-cni-439645", held for 25.651447824s
	I1212 23:36:33.940720  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.941043  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:33.944022  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944345  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.944380  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.944480  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945025  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945203  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:36:33.945298  133802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 23:36:33.945360  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.945440  133802 ssh_runner.go:195] Run: cat /version.json
	I1212 23:36:33.945462  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:36:33.948277  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948308  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948626  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948657  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948688  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:33.948706  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:33.948786  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.948902  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:36:33.949007  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949229  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949244  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:36:33.949425  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:36:33.949501  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:33.949585  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:36:34.042106  133802 ssh_runner.go:195] Run: systemctl --version
	I1212 23:36:34.067295  133802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 23:36:34.232321  133802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 23:36:34.239110  133802 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 23:36:34.239193  133802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 23:36:34.255820  133802 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 23:36:34.255846  133802 start.go:475] detecting cgroup driver to use...
	I1212 23:36:34.255922  133802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 23:36:34.270214  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 23:36:34.282323  133802 docker.go:203] disabling cri-docker service (if available) ...
	I1212 23:36:34.282395  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 23:36:34.295221  133802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 23:36:34.307456  133802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 23:36:34.424362  133802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 23:36:34.550588  133802 docker.go:219] disabling docker service ...
	I1212 23:36:34.550666  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 23:36:34.564243  133802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 23:36:34.576734  133802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 23:36:34.685970  133802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 23:36:34.806980  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 23:36:34.822300  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 23:36:34.841027  133802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 23:36:34.841096  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.850879  133802 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 23:36:34.850961  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.860041  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.869086  133802 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 23:36:34.879476  133802 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 23:36:34.889070  133802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 23:36:34.897768  133802 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 23:36:34.897820  133802 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 23:36:34.911555  133802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 23:36:34.920330  133802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 23:36:35.030591  133802 ssh_runner.go:195] Run: sudo systemctl restart crio
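Note: the sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the restart. Pieced together from those commands alone (the rest of the file is not shown in the log, and the TOML table headers are assumed from CRI-O's standard config layout), the relevant settings end up as:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"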
	I1212 23:36:35.201172  133802 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 23:36:35.201259  133802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 23:36:35.207456  133802 start.go:543] Will wait 60s for crictl version
	I1212 23:36:35.207528  133802 ssh_runner.go:195] Run: which crictl
	I1212 23:36:35.211996  133802 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 23:36:35.258165  133802 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1212 23:36:35.258298  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.308715  133802 ssh_runner.go:195] Run: crio --version
	I1212 23:36:35.361717  133802 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I1212 23:36:35.363160  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetIP
	I1212 23:36:35.365887  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366260  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:36:35.366297  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:36:35.366516  133802 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 23:36:35.370757  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
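Note: the one-liner above adds a host.minikube.internal entry pointing at the host's address on the VM network (192.168.61.1) without leaving a stale duplicate behind. The same commands as logged, only re-wrapped and commented for readability:

    # drop any existing host.minikube.internal line, append the fresh one,
    # then copy the result back over /etc/hosts as root
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.61.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts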
	I1212 23:36:35.385019  133802 localpath.go:92] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.crt -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.crt
	I1212 23:36:35.385199  133802 localpath.go:117] copying /home/jenkins/minikube-integration/17761-76611/.minikube/client.key -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:35.387269  133802 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 23:36:35.388849  133802 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 23:36:35.388917  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:35.425861  133802 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I1212 23:36:35.425931  133802 ssh_runner.go:195] Run: which lz4
	I1212 23:36:35.430186  133802 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 23:36:35.434663  133802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 23:36:35.434700  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401739178 bytes)
	I1212 23:36:37.061349  133802 crio.go:444] Took 1.631183 seconds to copy over tarball
	I1212 23:36:37.061464  133802 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 23:36:39.637255  133802 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575759736s)
	I1212 23:36:39.637291  133802 crio.go:451] Took 2.575898 seconds to extract the tarball
	I1212 23:36:39.637303  133802 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 23:36:39.677494  133802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 23:36:39.766035  133802 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 23:36:39.766059  133802 cache_images.go:84] Images are preloaded, skipping loading
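Note: because no kube images were found in CRI-O's store, minikube copied the cached preload tarball over SSH and unpacked it directly into /var, populating container storage without pulling from a registry. The equivalent manual steps, as logged above (the "%s %y" stat format is assumed; the logged line shows the garbled "%!s(MISSING) %!y(MISSING)" form):

    # check whether a previous preload is already on the guest
    stat -c "%s %y" /preloaded.tar.lz4
    # unpack the scp'd tarball into /var (container images land under /var/lib/containers)
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json   # now reports the preloaded images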
	I1212 23:36:39.766189  133802 ssh_runner.go:195] Run: crio config
	I1212 23:36:39.833582  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:39.833614  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:39.833641  133802 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1212 23:36:39.833669  133802 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.126 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-439645 NodeName:newest-cni-439645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 23:36:39.833860  133802 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-439645"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 23:36:39.833987  133802 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-439645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 23:36:39.834070  133802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I1212 23:36:39.844985  133802 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 23:36:39.845069  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 23:36:39.856286  133802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1212 23:36:39.874343  133802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1212 23:36:39.892075  133802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1212 23:36:39.912183  133802 ssh_runner.go:195] Run: grep 192.168.61.126	control-plane.minikube.internal$ /etc/hosts
	I1212 23:36:39.916201  133802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 23:36:39.928092  133802 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645 for IP: 192.168.61.126
	I1212 23:36:39.928126  133802 certs.go:190] acquiring lock for shared ca certs: {Name:mka69d87bb8d52665cf9447f83b78e5d881b0569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:39.928286  133802 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key
	I1212 23:36:39.928341  133802 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key
	I1212 23:36:39.928452  133802 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/client.key
	I1212 23:36:39.928484  133802 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a
	I1212 23:36:39.928502  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a with IP's: [192.168.61.126 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 23:36:40.086731  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a ...
	I1212 23:36:40.086762  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a: {Name:mk84eb32c33b5eeb3ae8582be9a9ef465e3ffdf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.086933  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a ...
	I1212 23:36:40.086947  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a: {Name:mk3956332b5f04ef30f2f27bb7fd660cd7454547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.087015  133802 certs.go:337] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt
	I1212 23:36:40.087074  133802 certs.go:341] copying /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key.829f218a -> /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key
	I1212 23:36:40.087122  133802 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key
	I1212 23:36:40.087136  133802 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt with IP's: []
	I1212 23:36:40.232772  133802 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt ...
	I1212 23:36:40.232814  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt: {Name:mk67436a71f51ed921cc97aac7a15bc922b20637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.232974  133802 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key ...
	I1212 23:36:40.232993  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key: {Name:mk50b06404e6bba8454342d2a726ff327c0cec64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:36:40.233148  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem (1338 bytes)
	W1212 23:36:40.233183  133802 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825_empty.pem, impossibly tiny 0 bytes
	I1212 23:36:40.233191  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 23:36:40.233214  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/ca.pem (1082 bytes)
	I1212 23:36:40.233239  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/cert.pem (1123 bytes)
	I1212 23:36:40.233260  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/certs/home/jenkins/minikube-integration/17761-76611/.minikube/certs/key.pem (1675 bytes)
	I1212 23:36:40.233303  133802 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem (1708 bytes)
	I1212 23:36:40.233973  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 23:36:40.258898  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 23:36:40.281796  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 23:36:40.306279  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 23:36:40.330350  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 23:36:40.354800  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 23:36:40.381255  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 23:36:40.406517  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 23:36:40.432781  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 23:36:40.457098  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/certs/83825.pem --> /usr/share/ca-certificates/83825.pem (1338 bytes)
	I1212 23:36:40.481498  133802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/ssl/certs/838252.pem --> /usr/share/ca-certificates/838252.pem (1708 bytes)
	I1212 23:36:40.505394  133802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 23:36:40.521926  133802 ssh_runner.go:195] Run: openssl version
	I1212 23:36:40.527951  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 23:36:40.539530  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.544979  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.545055  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 23:36:40.551367  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 23:36:40.562238  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83825.pem && ln -fs /usr/share/ca-certificates/83825.pem /etc/ssl/certs/83825.pem"
	I1212 23:36:40.573393  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578071  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:11 /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.578118  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83825.pem
	I1212 23:36:40.583694  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/83825.pem /etc/ssl/certs/51391683.0"
	I1212 23:36:40.594194  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/838252.pem && ln -fs /usr/share/ca-certificates/838252.pem /etc/ssl/certs/838252.pem"
	I1212 23:36:40.605923  133802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610736  133802 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:11 /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.610801  133802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/838252.pem
	I1212 23:36:40.616424  133802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/838252.pem /etc/ssl/certs/3ec20f2e.0"
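Note: the test/ln commands above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs lets TLS clients find the CA by that hash. A minimal sketch of the pattern used here:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941, as seen above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"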
	I1212 23:36:40.627019  133802 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 23:36:40.631732  133802 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 23:36:40.631791  133802 kubeadm.go:404] StartCluster: {Name:newest-cni-439645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-439645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 23:36:40.631885  133802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 23:36:40.631927  133802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 23:36:40.674482  133802 cri.go:89] found id: ""
	I1212 23:36:40.674562  133802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 23:36:40.685198  133802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 23:36:40.697550  133802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 23:36:40.709344  133802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 23:36:40.709392  133802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 23:36:40.826867  133802 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I1212 23:36:40.826985  133802 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 23:36:41.099627  133802 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 23:36:41.099767  133802 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 23:36:41.099941  133802 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 23:36:41.379678  133802 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 23:36:41.472054  133802 out.go:204]   - Generating certificates and keys ...
	I1212 23:36:41.472187  133802 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 23:36:41.472277  133802 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 23:36:41.814537  133802 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 23:36:42.030569  133802 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 23:36:42.212440  133802 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 23:36:42.408531  133802 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 23:36:42.596187  133802 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 23:36:42.596349  133802 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.835041  133802 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 23:36:42.835264  133802 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-439645] and IPs [192.168.61.126 127.0.0.1 ::1]
	I1212 23:36:42.965385  133802 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 23:36:43.404498  133802 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 23:36:43.484084  133802 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 23:36:43.484419  133802 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 23:36:43.679972  133802 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 23:36:43.897206  133802 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 23:36:44.144262  133802 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 23:36:44.342356  133802 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 23:36:44.516812  133802 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 23:36:44.517253  133802 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 23:36:44.520476  133802 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 23:36:44.522075  133802 out.go:204]   - Booting up control plane ...
	I1212 23:36:44.522243  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 23:36:44.522359  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 23:36:44.522885  133802 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 23:36:44.540187  133802 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 23:36:44.541123  133802 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 23:36:44.541258  133802 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 23:36:44.679611  133802 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 23:36:52.682546  133802 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005372 seconds
	I1212 23:36:52.699907  133802 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 23:36:52.716862  133802 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 23:36:53.253165  133802 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 23:36:53.253380  133802 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-439645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 23:36:53.769781  133802 kubeadm.go:322] [bootstrap-token] Using token: v2icbq.kf108uw3b7rzt7qu
	I1212 23:36:53.771411  133802 out.go:204]   - Configuring RBAC rules ...
	I1212 23:36:53.771550  133802 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 23:36:53.785275  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 23:36:53.797359  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 23:36:53.803152  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 23:36:53.809783  133802 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 23:36:53.822923  133802 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 23:36:53.839260  133802 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 23:36:54.108016  133802 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 23:36:54.206851  133802 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 23:36:54.207818  133802 kubeadm.go:322] 
	I1212 23:36:54.207927  133802 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 23:36:54.207951  133802 kubeadm.go:322] 
	I1212 23:36:54.208061  133802 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 23:36:54.208073  133802 kubeadm.go:322] 
	I1212 23:36:54.208106  133802 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 23:36:54.208190  133802 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 23:36:54.208263  133802 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 23:36:54.208272  133802 kubeadm.go:322] 
	I1212 23:36:54.208379  133802 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 23:36:54.208391  133802 kubeadm.go:322] 
	I1212 23:36:54.208444  133802 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 23:36:54.208465  133802 kubeadm.go:322] 
	I1212 23:36:54.208548  133802 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 23:36:54.208620  133802 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 23:36:54.208718  133802 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 23:36:54.208731  133802 kubeadm.go:322] 
	I1212 23:36:54.208872  133802 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 23:36:54.208985  133802 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 23:36:54.209004  133802 kubeadm.go:322] 
	I1212 23:36:54.209130  133802 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209285  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 \
	I1212 23:36:54.209329  133802 kubeadm.go:322] 	--control-plane 
	I1212 23:36:54.209345  133802 kubeadm.go:322] 
	I1212 23:36:54.209421  133802 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 23:36:54.209437  133802 kubeadm.go:322] 
	I1212 23:36:54.209512  133802 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v2icbq.kf108uw3b7rzt7qu \
	I1212 23:36:54.209612  133802 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cf451e9de31410137a9591dd9b0dc2bd7e93f286a9b116a1d0a376e773a32f42 
	I1212 23:36:54.210259  133802 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 23:36:54.210302  133802 cni.go:84] Creating CNI manager for ""
	I1212 23:36:54.210325  133802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 23:36:54.212174  133802 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 23:36:54.213539  133802 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 23:36:54.245016  133802 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
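Note: the 457-byte /etc/cni/net.d/1-k8s.conflist written here is not echoed in the log. A hypothetical bridge-type conflist of the kind this step produces (field values assumed; only the bridge plugin choice and the 10.42.0.0/16 pod CIDR are implied by the surrounding log) might look like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }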
	I1212 23:36:54.272864  133802 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 23:36:54.272931  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.272968  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=newest-cni-439645 minikube.k8s.io/updated_at=2023_12_12T23_36_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.379602  133802 ops.go:34] apiserver oom_adj: -16
	I1212 23:36:54.633228  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:54.739769  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.351504  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:55.851143  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.351206  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:56.851623  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.351305  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:57.851799  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.351559  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:58.850971  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.351009  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:36:59.851133  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.351899  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:00.851352  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.351666  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:01.851015  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.351018  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:02.850982  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.351233  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:03.851881  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.351793  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:04.851071  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.351199  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:05.851905  133802 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 23:37:06.014064  133802 kubeadm.go:1088] duration metric: took 11.741191092s to wait for elevateKubeSystemPrivileges.
	I1212 23:37:06.014111  133802 kubeadm.go:406] StartCluster complete in 25.38232392s
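Note: the burst of `kubectl get sa default` calls between 23:36:54 and 23:37:06 is a readiness poll. kubeadm has finished, but the default ServiceAccount only appears once the controller-manager's service-account controllers are up, so minikube retries roughly every 500ms before treating the kube-system privilege escalation (the minikube-rbac clusterrolebinding created earlier) as settled. Roughly equivalent shell:

    KUBECTL=/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done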
	I1212 23:37:06.014140  133802 settings.go:142] acquiring lock: {Name:mk63a7317f157298d3f2d571dd9a6545e40f6012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.014240  133802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 23:37:06.016995  133802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-76611/kubeconfig: {Name:mkd30784daf64363feb2413d937b99eb8cd15a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 23:37:06.017348  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 23:37:06.017516  133802 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 23:37:06.017602  133802 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:37:06.017622  133802 addons.go:69] Setting default-storageclass=true in profile "newest-cni-439645"
	I1212 23:37:06.017648  133802 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-439645"
	I1212 23:37:06.017602  133802 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-439645"
	I1212 23:37:06.017663  133802 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-439645"
	I1212 23:37:06.017718  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.018178  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018192  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.018229  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.018346  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.035695  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I1212 23:37:06.035705  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I1212 23:37:06.036249  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036338  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.036854  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.036875  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037009  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.037027  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.037420  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037460  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.037591  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.038135  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.038185  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.041698  133802 addons.go:231] Setting addon default-storageclass=true in "newest-cni-439645"
	I1212 23:37:06.041755  133802 host.go:66] Checking if "newest-cni-439645" exists ...
	I1212 23:37:06.042232  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.042290  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.054882  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I1212 23:37:06.055360  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.055977  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.056004  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.056361  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.056554  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.057908  133802 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-439645" context rescaled to 1 replicas
	I1212 23:37:06.057952  133802 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 23:37:06.061829  133802 out.go:177] * Verifying Kubernetes components...
	I1212 23:37:06.058630  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.063335  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I1212 23:37:06.063863  133802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 23:37:06.064075  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.065638  133802 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 23:37:06.064514  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.067314  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.067432  133802 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.067459  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 23:37:06.067484  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.067889  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.068492  133802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:06.068544  133802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:06.071038  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071327  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.071427  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.071615  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.071835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.071952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.072334  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.086051  133802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I1212 23:37:06.086557  133802 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:06.087406  133802 main.go:141] libmachine: Using API Version  1
	I1212 23:37:06.087422  133802 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:06.087816  133802 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:06.088051  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:06.089796  133802 main.go:141] libmachine: (newest-cni-439645) Calling .DriverName
	I1212 23:37:06.090067  133802 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.090086  133802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 23:37:06.090116  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHHostname
	I1212 23:37:06.093365  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093786  133802 main.go:141] libmachine: (newest-cni-439645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:10:d4", ip: ""} in network mk-newest-cni-439645: {Iface:virbr3 ExpiryTime:2023-12-13 00:36:25 +0000 UTC Type:0 Mac:52:54:00:99:10:d4 Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:newest-cni-439645 Clientid:01:52:54:00:99:10:d4}
	I1212 23:37:06.093828  133802 main.go:141] libmachine: (newest-cni-439645) DBG | domain newest-cni-439645 has defined IP address 192.168.61.126 and MAC address 52:54:00:99:10:d4 in network mk-newest-cni-439645
	I1212 23:37:06.093952  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHPort
	I1212 23:37:06.094153  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHKeyPath
	I1212 23:37:06.094345  133802 main.go:141] libmachine: (newest-cni-439645) Calling .GetSSHUsername
	I1212 23:37:06.094511  133802 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/newest-cni-439645/id_rsa Username:docker}
	I1212 23:37:06.207823  133802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 23:37:06.209786  133802 api_server.go:52] waiting for apiserver process to appear ...
	I1212 23:37:06.209852  133802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 23:37:06.224108  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 23:37:06.279819  133802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 23:37:06.792429  133802 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
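	(The sed pipeline logged a few lines above inserts a hosts stanza into the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway, 192.168.61.1 on this network. A minimal manual check, assuming the usual minikube kubectl context named after the profile, could look like the following; the expected fragment is exactly the block the sed expression injects.)
	  # hypothetical manual verification of the patched Corefile
	  kubectl --context newest-cni-439645 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # expected fragment:
	  #        hosts {
	  #           192.168.61.1 host.minikube.internal
	  #           fallthrough
	  #        }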
	I1212 23:37:06.792532  133802 api_server.go:72] duration metric: took 734.539895ms to wait for apiserver process to appear ...
	I1212 23:37:06.792561  133802 api_server.go:88] waiting for apiserver healthz status ...
	I1212 23:37:06.792582  133802 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I1212 23:37:06.801498  133802 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
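	(The healthz probe above can be reproduced by hand against the same endpoint; a rough equivalent, hypothetical and with TLS verification skipped rather than using the cluster CA, is shown below.)
	  # expect the literal response "ok" when the apiserver is healthy
	  curl -sk https://192.168.61.126:8443/healthz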
	I1212 23:37:06.813707  133802 api_server.go:141] control plane version: v1.29.0-rc.2
	I1212 23:37:06.813743  133802 api_server.go:131] duration metric: took 21.176357ms to wait for apiserver health ...
	I1212 23:37:06.813754  133802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 23:37:06.842793  133802 system_pods.go:59] 5 kube-system pods found
	I1212 23:37:06.842831  133802 system_pods.go:61] "etcd-newest-cni-439645" [7568458a-44a4-460a-8f19-50a0b12ce47e] Running
	I1212 23:37:06.842837  133802 system_pods.go:61] "kube-apiserver-newest-cni-439645" [be37172f-a2c6-43f0-ba6f-026b57424206] Running
	I1212 23:37:06.842841  133802 system_pods.go:61] "kube-controller-manager-newest-cni-439645" [949056cc-9959-4160-bf82-bc9e3afbd86f] Running
	I1212 23:37:06.842845  133802 system_pods.go:61] "kube-proxy-9jtg7" [3c4c2367-6254-4d81-83f0-054b4d33515b] Pending
	I1212 23:37:06.842849  133802 system_pods.go:61] "kube-scheduler-newest-cni-439645" [64a5920a-0055-457c-8f06-e81450e5d8af] Running
	I1212 23:37:06.842858  133802 system_pods.go:74] duration metric: took 29.095739ms to wait for pod list to return data ...
	I1212 23:37:06.842869  133802 default_sa.go:34] waiting for default service account to be created ...
	I1212 23:37:06.860986  133802 default_sa.go:45] found service account: "default"
	I1212 23:37:06.861034  133802 default_sa.go:55] duration metric: took 18.151161ms for default service account to be created ...
	I1212 23:37:06.861049  133802 kubeadm.go:581] duration metric: took 803.062192ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1212 23:37:06.861071  133802 node_conditions.go:102] verifying NodePressure condition ...
	I1212 23:37:06.874325  133802 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 23:37:06.874362  133802 node_conditions.go:123] node cpu capacity is 2
	I1212 23:37:06.874378  133802 node_conditions.go:105] duration metric: took 13.301256ms to run NodePressure ...
	I1212 23:37:06.874393  133802 start.go:228] waiting for startup goroutines ...
	I1212 23:37:07.110459  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110492  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.110568  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.110646  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112440  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112454  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112476  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.112487  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112499  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112526  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.112541  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.112572  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.112585  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.112597  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.113071  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113084  133802 main.go:141] libmachine: (newest-cni-439645) DBG | Closing plugin on server side
	I1212 23:37:07.113115  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113133  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.113087  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.113349  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.164813  133802 main.go:141] libmachine: Making call to close driver server
	I1212 23:37:07.164835  133802 main.go:141] libmachine: (newest-cni-439645) Calling .Close
	I1212 23:37:07.165137  133802 main.go:141] libmachine: Successfully made call to close driver server
	I1212 23:37:07.165171  133802 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 23:37:07.166765  133802 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 23:37:07.167996  133802 addons.go:502] enable addons completed in 1.150489286s: enabled=[storage-provisioner default-storageclass]
	I1212 23:37:07.168052  133802 start.go:233] waiting for cluster config update ...
	I1212 23:37:07.168068  133802 start.go:242] writing updated cluster config ...
	I1212 23:37:07.168346  133802 ssh_runner.go:195] Run: rm -f paused
	I1212 23:37:07.239467  133802 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.2 (minor skew: 1)
	I1212 23:37:07.241846  133802 out.go:177] * Done! kubectl is now configured to use "newest-cni-439645" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-12 23:17:40 UTC, ends at Tue 2023-12-12 23:37:47 UTC. --
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.642738719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424267642720275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a2cb803f-06f1-4d66-9c52-e46fc3405a99 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.643399645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4273a1d9-565b-46ff-9ff4-e17c858dbc41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.643526234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4273a1d9-565b-46ff-9ff4-e17c858dbc41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.643697106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4273a1d9-565b-46ff-9ff4-e17c858dbc41 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.683815943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c2483611-4840-49d7-ae83-ea3ba7966a97 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.683875821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c2483611-4840-49d7-ae83-ea3ba7966a97 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.685295539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9b82e82f-e9ed-4a00-82b9-b23eeb8ce799 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.685749838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424267685734964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9b82e82f-e9ed-4a00-82b9-b23eeb8ce799 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.687067727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=060e2c91-16d1-4cd2-a50b-f114aa77b1ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.687121448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=060e2c91-16d1-4cd2-a50b-f114aa77b1ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.687335490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=060e2c91-16d1-4cd2-a50b-f114aa77b1ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.728578295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=63e2a10f-476b-4262-bfb9-5b00e053e9e6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.728638467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=63e2a10f-476b-4262-bfb9-5b00e053e9e6 name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.729998541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=72f55261-991a-4b55-9764-9f52f30a2e48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.730425146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424267730408921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=72f55261-991a-4b55-9764-9f52f30a2e48 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.731169171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=09ac4c7c-441c-4772-92a2-980a60086ea2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.731215572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=09ac4c7c-441c-4772-92a2-980a60086ea2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.731386501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=09ac4c7c-441c-4772-92a2-980a60086ea2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.768590976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dd9ee86e-9d9d-4da8-aa9f-9159bc939cca name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.768648168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dd9ee86e-9d9d-4da8-aa9f-9159bc939cca name=/runtime.v1.RuntimeService/Version
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.769855361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=29e5642f-905a-4cfc-9d00-710f51cabc8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.770231456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1702424267770219351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=29e5642f-905a-4cfc-9d00-710f51cabc8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.770832092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=da717666-a3c3-4a0b-86ab-260ec98b00c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.770875450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=da717666-a3c3-4a0b-86ab-260ec98b00c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 23:37:47 embed-certs-809120 crio[710]: time="2023-12-12 23:37:47.771260336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed,PodSandboxId:129efa3c9d2cf04a810cb065d63a0cf271af463484b135d013d2332f8cea6d01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1702423395644393401,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a660d9e-2a10-49de-bb1d-fd237aa3345e,},Annotations:map[string]string{io.kubernetes.container.hash: c8c28f82,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30,PodSandboxId:66886a4d064b7b08406e5a4bc6d23058a41ba996b9884b4f64d6754c103875ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1702423395120296407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4nb6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79e36cc-eaa9-45da-8a3e-414424129991,},Annotations:map[string]string{io.kubernetes.container.hash: 2e1ca202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d,PodSandboxId:c2c36a7b6bfa8a65d8348c69a7dae56e31c200d9434a246e458acdb3224fb7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1702423393909712676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qz4fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a2e604-2026-486a-befa-f5a310cb017e,},Annotations:map[string]string{io.kubernetes.container.hash: 706ffae6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb,PodSandboxId:582bb3c4a02a4eb6a565070f2c570cde9a20bd720c669b2b7cbfc40de1b5825e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1702423370602662947,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fdf9d27fc79998ff10cd42f97ebf247,},An
notations:map[string]string{io.kubernetes.container.hash: 5c4b8ad8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67,PodSandboxId:cc828429d9674e5064b1d6e5e61f52e36059cfba7733c796173ca628407718ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1702423370021875478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab75e7f925ea4b422d1ed1ea4cb05b,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c,PodSandboxId:7d75363a0bc42756e54e2ecf0d3d3d81e8277decd303de104e97b962b5c75345,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1702423369706580830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30745ca1cb04441d80b995fd749431e1,},Annotations:map[string
]string{io.kubernetes.container.hash: 76ab2f46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e,PodSandboxId:352e4d00108afe8179d427389e3ef1ec10ee248650388144f0b967f8b32a759a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1702423369588842263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-809120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76bc9ee5c92f8a661163c2be8ef3952
1,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=da717666-a3c3-4a0b-86ab-260ec98b00c5 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c34f627c7cd17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   129efa3c9d2cf       storage-provisioner
	66342a7ece6d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   66886a4d064b7       kube-proxy-4nb6w
	6a3df78435249       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   c2c36a7b6bfa8       coredns-5dd5756b68-qz4fn
	486b5230383fb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   582bb3c4a02a4       etcd-embed-certs-809120
	e7edb497978d8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   cc828429d9674       kube-scheduler-embed-certs-809120
	446438e29bfad       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   7d75363a0bc42       kube-apiserver-embed-certs-809120
	4fb33f05a9153       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   352e4d00108af       kube-controller-manager-embed-certs-809120
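	(This listing reflects the CRI-O view of the node; a comparable listing can be pulled by hand from inside the VM. The command below is a hypothetical equivalent using the embed-certs-809120 profile shown above, not the exact command the log collector ran.)
	  # list running containers via the CRI-O runtime on the node
	  minikube ssh -p embed-certs-809120 -- sudo crictl ps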
	
	* 
	* ==> coredns [6a3df78435249fab1a0c7505346718f8759c3c99c781a6dc333c1f596e83848d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-809120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-809120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=embed-certs-809120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T23_22_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 23:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-809120
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 23:37:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 23:33:33 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 23:33:33 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 23:33:33 +0000   Tue, 12 Dec 2023 23:22:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 23:33:33 +0000   Tue, 12 Dec 2023 23:23:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    embed-certs-809120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c22750f7cb4d4371bfa7e3d7f47269f3
	  System UUID:                c22750f7-cb4d-4371-bfa7-e3d7f47269f3
	  Boot ID:                    57045704-b81b-4b73-a22d-c562c550e68a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qz4fn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-809120                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-809120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-809120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4nb6w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-809120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-m6nc6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-809120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-809120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node embed-certs-809120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node embed-certs-809120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node embed-certs-809120 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node embed-certs-809120 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node embed-certs-809120 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-809120 event: Registered Node embed-certs-809120 in Controller
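	(The node description above can be regenerated against the same cluster with kubectl; a hedged example, assuming the standard minikube context name for this profile, is:)
	  kubectl --context embed-certs-809120 describe node embed-certs-809120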
	
	* 
	* ==> dmesg <==
	* [Dec12 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071652] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.775025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.820104] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147938] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.515272] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.661321] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.114182] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.162862] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.117114] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.253386] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Dec12 23:18] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[ +19.354754] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 23:22] systemd-fstab-generator[3526]: Ignoring "noauto" for root device
	[  +9.806829] systemd-fstab-generator[3852]: Ignoring "noauto" for root device
	[Dec12 23:23] kauditd_printk_skb: 2 callbacks suppressed
	[Dec12 23:37] hrtimer: interrupt took 2615525 ns
	
	* 
	* ==> etcd [486b5230383fb80822587f5c32f90adb78a93978df665394e44449e786b117fb] <==
	* {"level":"info","ts":"2023-12-12T23:22:52.211067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"35ecb74b0d77a53b","local-member-id":"7e2ae951029168ce","added-peer-id":"7e2ae951029168ce","added-peer-peer-urls":["https://192.168.50.221:2380"]}
	{"level":"info","ts":"2023-12-12T23:22:52.48353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgPreVoteResp from 7e2ae951029168ce at term 1"}
	{"level":"info","ts":"2023-12-12T23:22:52.483783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgVoteResp from 7e2ae951029168ce at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became leader at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.483864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e2ae951029168ce elected leader 7e2ae951029168ce at term 2"}
	{"level":"info","ts":"2023-12-12T23:22:52.485263Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7e2ae951029168ce","local-member-attributes":"{Name:embed-certs-809120 ClientURLs:[https://192.168.50.221:2379]}","request-path":"/0/members/7e2ae951029168ce/attributes","cluster-id":"35ecb74b0d77a53b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T23:22:52.4855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:52.486605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.221:2379"}
	{"level":"info","ts":"2023-12-12T23:22:52.486711Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.486854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T23:22:52.487928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"35ecb74b0d77a53b","local-member-id":"7e2ae951029168ce","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.488055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.488095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T23:22:52.491386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T23:22:52.491571Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T23:22:52.491662Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T23:32:52.629377Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2023-12-12T23:32:52.632348Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"2.292107ms","hash":2304633933}
	{"level":"info","ts":"2023-12-12T23:32:52.632589Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2304633933,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2023-12-12T23:36:38.833762Z","caller":"traceutil/trace.go:171","msg":"trace[606357821] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"118.011557ms","start":"2023-12-12T23:36:38.7157Z","end":"2023-12-12T23:36:38.833712Z","steps":["trace[606357821] 'process raft request'  (duration: 117.613178ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T23:36:38.960765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.586474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T23:36:38.960968Z","caller":"traceutil/trace.go:171","msg":"trace[621373972] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1152; }","duration":"105.87112ms","start":"2023-12-12T23:36:38.855083Z","end":"2023-12-12T23:36:38.960954Z","steps":["trace[621373972] 'range keys from in-memory index tree'  (duration: 105.401404ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:37:48 up 20 min,  0 users,  load average: 0.21, 0.18, 0.18
	Linux embed-certs-809120 5.10.57 #1 SMP Tue Dec 12 18:39:03 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [446438e29bfad116261371d7ecba7a3a37d8f76a09335843990b5a49d2ba490c] <==
	* W1212 23:32:55.236646       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:32:55.236799       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:32:55.236810       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:32:55.236655       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:32:55.236838       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:32:55.238131       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:33:54.115508       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:33:55.237648       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:33:55.237778       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:33:55.237791       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:33:55.238742       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:33:55.238949       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:33:55.239007       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:34:54.115207       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 23:35:54.115641       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1212 23:35:55.239016       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:55.239357       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 23:35:55.239528       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 23:35:55.239139       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 23:35:55.239631       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1212 23:35:55.241348       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 23:36:54.115560       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [4fb33f05a91535b917c6bc99c2d11b7045ac56197c0c9ac99f7e6eab482cde8e] <==
	* I1212 23:32:10.794101       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:32:40.337534       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:32:40.802673       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:33:10.345749       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:33:10.812219       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:33:40.352313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:33:40.823077       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:34:10.359229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:10.833522       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 23:34:11.957267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="226.157µs"
	I1212 23:34:25.953546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="195.61µs"
	E1212 23:34:40.366021       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:34:40.844367       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:35:10.375047       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:10.859041       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:35:40.381525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:35:40.869888       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:10.389929       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:10.878762       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:36:40.397257       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:36:40.889821       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:37:10.402758       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:37:10.898837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 23:37:40.410117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1212 23:37:40.910713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [66342a7ece6d2c177432bced675db2755858b356270b9c2bc42a7deb0c39dd30] <==
	* I1212 23:23:15.719395       1 server_others.go:69] "Using iptables proxy"
	I1212 23:23:15.752079       1 node.go:141] Successfully retrieved node IP: 192.168.50.221
	I1212 23:23:15.878158       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 23:23:15.878236       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 23:23:15.881610       1 server_others.go:152] "Using iptables Proxier"
	I1212 23:23:15.881671       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 23:23:15.881899       1 server.go:846] "Version info" version="v1.28.4"
	I1212 23:23:15.881934       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 23:23:15.884123       1 config.go:188] "Starting service config controller"
	I1212 23:23:15.884177       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 23:23:15.884207       1 config.go:97] "Starting endpoint slice config controller"
	I1212 23:23:15.884211       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 23:23:15.886688       1 config.go:315] "Starting node config controller"
	I1212 23:23:15.886727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 23:23:15.984492       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 23:23:15.984551       1 shared_informer.go:318] Caches are synced for service config
	I1212 23:23:15.986799       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e7edb497978d8b6d45b12153ac9557d865bd962518f6dec0e4379212641c0c67] <==
	* W1212 23:22:54.316674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:54.316682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:54.316707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:54.316745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:55.193996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 23:22:55.194121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 23:22:55.213127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 23:22:55.213248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 23:22:55.240412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 23:22:55.240685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 23:22:55.288805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 23:22:55.288940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1212 23:22:55.302393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 23:22:55.302518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 23:22:55.378856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 23:22:55.378941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 23:22:55.478283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 23:22:55.478527       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 23:22:55.479286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 23:22:55.479352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 23:22:55.522516       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 23:22:55.522687       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 23:22:55.549805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 23:22:55.549855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 23:22:57.987942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-12 23:17:40 UTC, ends at Tue 2023-12-12 23:37:48 UTC. --
	Dec 12 23:34:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:34:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:34:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:35:00 embed-certs-809120 kubelet[3859]: E1212 23:35:00.935972    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:35:11 embed-certs-809120 kubelet[3859]: E1212 23:35:11.935849    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:35:25 embed-certs-809120 kubelet[3859]: E1212 23:35:25.937130    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:35:36 embed-certs-809120 kubelet[3859]: E1212 23:35:36.935716    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:35:48 embed-certs-809120 kubelet[3859]: E1212 23:35:48.935540    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:35:57 embed-certs-809120 kubelet[3859]: E1212 23:35:57.963856    3859 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:35:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:35:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:35:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:36:02 embed-certs-809120 kubelet[3859]: E1212 23:36:02.937568    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:36:14 embed-certs-809120 kubelet[3859]: E1212 23:36:14.936622    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:36:28 embed-certs-809120 kubelet[3859]: E1212 23:36:28.935619    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:36:43 embed-certs-809120 kubelet[3859]: E1212 23:36:43.935949    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:36:54 embed-certs-809120 kubelet[3859]: E1212 23:36:54.936406    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:36:57 embed-certs-809120 kubelet[3859]: E1212 23:36:57.961202    3859 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 23:36:57 embed-certs-809120 kubelet[3859]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 23:36:57 embed-certs-809120 kubelet[3859]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 23:36:57 embed-certs-809120 kubelet[3859]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 23:37:09 embed-certs-809120 kubelet[3859]: E1212 23:37:09.935749    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:37:22 embed-certs-809120 kubelet[3859]: E1212 23:37:22.936968    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:37:33 embed-certs-809120 kubelet[3859]: E1212 23:37:33.936781    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	Dec 12 23:37:46 embed-certs-809120 kubelet[3859]: E1212 23:37:46.936820    3859 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-m6nc6" podUID="e12a702a-24d8-4b08-9ca3-a1b79f5df5e5"
	
	* 
	* ==> storage-provisioner [c34f627c7cd173455aaa78064c9ce9906e3986bae0accfd7d2a6c190f6c402ed] <==
	* I1212 23:23:15.814633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 23:23:15.825151       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 23:23:15.825228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 23:23:15.836554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 23:23:15.836754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b!
	I1212 23:23:15.837947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c92bc176-fa01-4b7d-ab51-dd432abe9c92", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b became leader
	I1212 23:23:15.938030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-809120_416095db-2a9e-4d5d-ae51-9f8c4bf43e1b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-809120 -n embed-certs-809120
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-809120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-m6nc6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6: exit status 1 (70.113338ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-m6nc6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-809120 describe pod metrics-server-57f55c9bc5-m6nc6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (327.59s)
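Note on the failure above: the only non-running pod in the post-mortem is metrics-server, and the kubelet log shows it stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4, which looks like the intentionally unreachable registry this test configures; the pod also appears to have been removed between the two post-mortem calls, hence the NotFound from describe. A minimal triage sketch when reproducing locally (the k8s-app=metrics-server label is an assumption about the addon's manifest, not taken from this run):

	# List the addon pods and their current image-pull state
	# (label assumed from the metrics-server addon manifest)
	kubectl --context embed-certs-809120 -n kube-system get pods -l k8s-app=metrics-server -o wide

	# Surface the image-pull events for whichever pod currently exists
	kubectl --context embed-certs-809120 -n kube-system describe pods -l k8s-app=metrics-server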

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (140.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-439645 --alsologtostderr -v=3
E1212 23:37:09.617597   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-439645 --alsologtostderr -v=3: exit status 82 (2m2.462769821s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-439645"  ...
	* Stopping node "newest-cni-439645"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 23:37:08.880710  134184 out.go:296] Setting OutFile to fd 1 ...
	I1212 23:37:08.881062  134184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:37:08.881077  134184 out.go:309] Setting ErrFile to fd 2...
	I1212 23:37:08.881084  134184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 23:37:08.881393  134184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 23:37:08.881734  134184 out.go:303] Setting JSON to false
	I1212 23:37:08.881887  134184 mustload.go:65] Loading cluster: newest-cni-439645
	I1212 23:37:08.882381  134184 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:37:08.882497  134184 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/newest-cni-439645/config.json ...
	I1212 23:37:08.882717  134184 mustload.go:65] Loading cluster: newest-cni-439645
	I1212 23:37:08.882905  134184 config.go:182] Loaded profile config "newest-cni-439645": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 23:37:08.882949  134184 stop.go:39] StopHost: newest-cni-439645
	I1212 23:37:08.883536  134184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:37:08.883609  134184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:37:08.898924  134184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I1212 23:37:08.899460  134184 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:37:08.900214  134184 main.go:141] libmachine: Using API Version  1
	I1212 23:37:08.900250  134184 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:37:08.900622  134184 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:37:08.902788  134184 out.go:177] * Stopping node "newest-cni-439645"  ...
	I1212 23:37:08.904175  134184 main.go:141] libmachine: Stopping "newest-cni-439645"...
	I1212 23:37:08.904201  134184 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:37:08.906024  134184 main.go:141] libmachine: (newest-cni-439645) Calling .Stop
	I1212 23:37:08.910338  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 0/60
	I1212 23:37:09.911745  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 1/60
	I1212 23:37:10.913046  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 2/60
	I1212 23:37:11.914328  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 3/60
	I1212 23:37:12.915767  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 4/60
	I1212 23:37:13.918140  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 5/60
	I1212 23:37:14.919483  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 6/60
	I1212 23:37:15.921947  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 7/60
	I1212 23:37:16.923391  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 8/60
	I1212 23:37:17.924652  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 9/60
	I1212 23:37:18.926975  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 10/60
	I1212 23:37:19.928470  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 11/60
	I1212 23:37:20.930089  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 12/60
	I1212 23:37:21.931622  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 13/60
	I1212 23:37:22.933780  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 14/60
	I1212 23:37:23.936233  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 15/60
	I1212 23:37:24.937898  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 16/60
	I1212 23:37:26.054971  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 17/60
	I1212 23:37:27.056711  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 18/60
	I1212 23:37:28.058279  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 19/60
	I1212 23:37:29.060822  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 20/60
	I1212 23:37:30.062076  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 21/60
	I1212 23:37:31.063796  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 22/60
	I1212 23:37:32.066215  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 23/60
	I1212 23:37:33.067837  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 24/60
	I1212 23:37:34.069836  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 25/60
	I1212 23:37:35.071345  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 26/60
	I1212 23:37:36.072796  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 27/60
	I1212 23:37:37.075008  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 28/60
	I1212 23:37:38.076458  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 29/60
	I1212 23:37:39.077916  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 30/60
	I1212 23:37:40.079339  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 31/60
	I1212 23:37:41.080855  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 32/60
	I1212 23:37:42.082180  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 33/60
	I1212 23:37:43.084289  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 34/60
	I1212 23:37:44.086161  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 35/60
	I1212 23:37:45.087551  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 36/60
	I1212 23:37:46.088887  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 37/60
	I1212 23:37:47.090381  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 38/60
	I1212 23:37:48.091831  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 39/60
	I1212 23:37:49.094972  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 40/60
	I1212 23:37:50.096526  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 41/60
	I1212 23:37:51.097984  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 42/60
	I1212 23:37:52.099591  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 43/60
	I1212 23:37:53.101153  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 44/60
	I1212 23:37:54.103283  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 45/60
	I1212 23:37:55.104608  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 46/60
	I1212 23:37:56.106090  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 47/60
	I1212 23:37:57.107405  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 48/60
	I1212 23:37:58.108606  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 49/60
	I1212 23:37:59.110784  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 50/60
	I1212 23:38:00.112237  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 51/60
	I1212 23:38:01.113858  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 52/60
	I1212 23:38:02.115130  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 53/60
	I1212 23:38:03.116713  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 54/60
	I1212 23:38:04.118830  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 55/60
	I1212 23:38:05.120285  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 56/60
	I1212 23:38:06.121900  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 57/60
	I1212 23:38:07.123352  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 58/60
	I1212 23:38:08.124886  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 59/60
	I1212 23:38:09.126072  134184 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:38:09.126149  134184 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:38:09.126167  134184 retry.go:31] will retry after 1.496814886s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:38:10.623890  134184 stop.go:39] StopHost: newest-cni-439645
	I1212 23:38:10.624400  134184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 23:38:10.624453  134184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 23:38:10.638454  134184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I1212 23:38:10.638925  134184 main.go:141] libmachine: () Calling .GetVersion
	I1212 23:38:10.639500  134184 main.go:141] libmachine: Using API Version  1
	I1212 23:38:10.639527  134184 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 23:38:10.639819  134184 main.go:141] libmachine: () Calling .GetMachineName
	I1212 23:38:10.642276  134184 out.go:177] * Stopping node "newest-cni-439645"  ...
	I1212 23:38:10.644044  134184 main.go:141] libmachine: Stopping "newest-cni-439645"...
	I1212 23:38:10.644067  134184 main.go:141] libmachine: (newest-cni-439645) Calling .GetState
	I1212 23:38:10.645775  134184 main.go:141] libmachine: (newest-cni-439645) Calling .Stop
	I1212 23:38:10.649003  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 0/60
	I1212 23:38:11.650581  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 1/60
	I1212 23:38:12.652100  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 2/60
	I1212 23:38:13.653605  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 3/60
	I1212 23:38:14.655288  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 4/60
	I1212 23:38:15.657162  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 5/60
	I1212 23:38:16.658557  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 6/60
	I1212 23:38:17.660660  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 7/60
	I1212 23:38:18.662166  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 8/60
	I1212 23:38:19.664488  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 9/60
	I1212 23:38:20.666752  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 10/60
	I1212 23:38:21.668722  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 11/60
	I1212 23:38:22.670777  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 12/60
	I1212 23:38:23.672242  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 13/60
	I1212 23:38:24.673687  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 14/60
	I1212 23:38:25.675574  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 15/60
	I1212 23:38:26.677082  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 16/60
	I1212 23:38:27.678768  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 17/60
	I1212 23:38:28.680771  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 18/60
	I1212 23:38:29.682215  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 19/60
	I1212 23:38:30.684175  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 20/60
	I1212 23:38:31.685751  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 21/60
	I1212 23:38:32.687057  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 22/60
	I1212 23:38:33.688512  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 23/60
	I1212 23:38:34.689873  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 24/60
	I1212 23:38:35.691310  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 25/60
	I1212 23:38:36.692992  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 26/60
	I1212 23:38:37.694415  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 27/60
	I1212 23:38:38.695942  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 28/60
	I1212 23:38:39.697473  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 29/60
	I1212 23:38:40.699291  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 30/60
	I1212 23:38:41.700886  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 31/60
	I1212 23:38:42.702229  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 32/60
	I1212 23:38:43.704247  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 33/60
	I1212 23:38:44.705889  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 34/60
	I1212 23:38:46.227272  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 35/60
	I1212 23:38:47.228802  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 36/60
	I1212 23:38:48.230202  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 37/60
	I1212 23:38:49.231674  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 38/60
	I1212 23:38:50.233079  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 39/60
	I1212 23:38:51.235152  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 40/60
	I1212 23:38:52.236664  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 41/60
	I1212 23:38:53.238024  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 42/60
	I1212 23:38:54.239558  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 43/60
	I1212 23:38:55.241040  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 44/60
	I1212 23:38:56.243062  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 45/60
	I1212 23:38:57.244497  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 46/60
	I1212 23:38:58.246047  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 47/60
	I1212 23:38:59.247585  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 48/60
	I1212 23:39:00.248939  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 49/60
	I1212 23:39:01.250500  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 50/60
	I1212 23:39:02.251815  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 51/60
	I1212 23:39:03.253271  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 52/60
	I1212 23:39:04.254646  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 53/60
	I1212 23:39:05.256037  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 54/60
	I1212 23:39:06.257944  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 55/60
	I1212 23:39:07.259368  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 56/60
	I1212 23:39:08.260763  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 57/60
	I1212 23:39:09.262132  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 58/60
	I1212 23:39:10.263613  134184 main.go:141] libmachine: (newest-cni-439645) Waiting for machine to stop 59/60
	I1212 23:39:11.264616  134184 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1212 23:39:11.264669  134184 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 23:39:11.266464  134184 out.go:177] 
	W1212 23:39:11.268066  134184 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 23:39:11.268082  134184 out.go:239] * 
	* 
	W1212 23:39:11.271446  134184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 23:39:11.272964  134184 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-439645 --alsologtostderr -v=3" : exit status 82
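Exit status 82 here is the GUEST_STOP_TIMEOUT path: both 60-retry stop loops completed with the libvirt domain still reporting "Running". When reproducing this locally with the kvm2 driver, the stuck domain can be inspected and force-stopped through virsh before retrying the graceful stop; a rough sketch, assuming the kvm2 driver's default qemu:///system connection and that the domain is named after the profile (neither is confirmed by this log):

	# Check what libvirt thinks the domain state is
	virsh -c qemu:///system domstate newest-cni-439645

	# Hard power-off the stuck domain, then retry the graceful stop
	virsh -c qemu:///system destroy newest-cni-439645
	out/minikube-linux-amd64 stop -p newest-cni-439645 --alsologtostderr -v=3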
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645
E1212 23:39:11.381090   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.386404   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.396686   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.416998   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.457296   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.537746   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:11.698287   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:12.018980   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:12.659210   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:13.940090   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:16.500942   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:17.803715   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:39:21.622012   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645: exit status 3 (18.487956513s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:39:29.763624  135114 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E1212 23:39:29.763648  135114 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-439645" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.95s)
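Note: the stop above timed out and left the newest-cni-439645 VM unreachable (ssh to 192.168.61.126:22 reports "no route to host"), so the follow-up status probe returns "Error" instead of "Stopped". A minimal sketch for checking the host state by hand, assuming the same profile name and binary path as in this run:

	# hedged sketch; profile name, flags and paths mirror the log above
	out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-439645
	out/minikube-linux-amd64 stop -p newest-cni-439645 --alsologtostderr -v=3
	out/minikube-linux-amd64 logs --file=logs.txt -p newest-cni-439645   # for attaching to a GitHub issue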

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645
E1212 23:39:31.863000   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645: exit status 3 (3.16860514s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:39:32.931675  135176 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E1212 23:39:32.931695  135176 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-439645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1212 23:39:36.886476   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:36.891807   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:36.902137   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:36.922500   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:36.962835   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:37.043204   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:37.203729   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:37.524345   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:38.164632   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-439645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153269207s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-439645 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645
E1212 23:39:39.445700   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:42.005978   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645: exit status 3 (3.061668802s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 23:39:42.147610  135245 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host
	E1212 23:39:42.147631  135245 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-439645" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.38s)
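Note: because the host never reached "Stopped", the addon enable runs against an unreachable VM and fails with MK_ADDON_ENABLE_PAUSED while trying to list paused containers over SSH. A minimal sketch of the same step, using the exact command line from the test; it is only meaningful once the status probe reports the host as Stopped or Running:

	# hedged sketch; retry after `status --format='{{.Host}}' -p newest-cni-439645` stops returning "Error"
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-439645 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4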

                                                
                                    

Test pass (240/307)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.43
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 5.49
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 6.79
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.58
27 TestOffline 131.83
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
32 TestAddons/Setup 143.88
34 TestAddons/parallel/Registry 16.03
36 TestAddons/parallel/InspektorGadget 11.68
37 TestAddons/parallel/MetricsServer 6.71
38 TestAddons/parallel/HelmTiller 12.29
40 TestAddons/parallel/CSI 68.03
41 TestAddons/parallel/Headlamp 15.87
42 TestAddons/parallel/CloudSpanner 5.92
43 TestAddons/parallel/LocalPath 15.59
44 TestAddons/parallel/NvidiaDevicePlugin 5.92
47 TestAddons/serial/GCPAuth/Namespaces 0.14
49 TestCertOptions 96.16
50 TestCertExpiration 269.02
52 TestForceSystemdFlag 74.16
53 TestForceSystemdEnv 58.64
55 TestKVMDriverInstallOrUpdate 1.19
59 TestErrorSpam/setup 45.56
60 TestErrorSpam/start 0.39
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.58
63 TestErrorSpam/unpause 1.82
64 TestErrorSpam/stop 2.28
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 61.14
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 36.91
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 31.71
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.56
87 TestFunctional/serial/LogsFileCmd 1.54
88 TestFunctional/serial/InvalidService 4.13
90 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DashboardCmd 16.04
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.24
98 TestFunctional/parallel/ServiceCmdConnect 9.8
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 48.89
102 TestFunctional/parallel/SSHCmd 0.49
103 TestFunctional/parallel/CpCmd 1.59
104 TestFunctional/parallel/MySQL 29.58
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
114 TestFunctional/parallel/License 0.2
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.88
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
120 TestFunctional/parallel/ImageCommands/ImageListYaml 1.12
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.94
122 TestFunctional/parallel/ImageCommands/Setup 1.07
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.5
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.35
127 TestFunctional/parallel/ServiceCmd/DeployApp 27.39
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.12
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.03
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.51
140 TestFunctional/parallel/ImageCommands/ImageRemove 1.16
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.55
142 TestFunctional/parallel/ServiceCmd/List 0.35
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.7
146 TestFunctional/parallel/ServiceCmd/Format 0.38
147 TestFunctional/parallel/ServiceCmd/URL 0.54
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
149 TestFunctional/parallel/ProfileCmd/profile_list 0.4
150 TestFunctional/parallel/MountCmd/any-port 7.68
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
152 TestFunctional/parallel/MountCmd/specific-port 2.12
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestIngressAddonLegacy/StartLegacyK8sCluster 76.89
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.53
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
167 TestJSONOutput/start/Command 101.22
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.71
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.69
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.11
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 98.35
199 TestMountStart/serial/StartWithMountFirst 28.34
200 TestMountStart/serial/VerifyMountFirst 0.41
201 TestMountStart/serial/StartWithMountSecond 27.98
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.89
204 TestMountStart/serial/VerifyMountPostDelete 0.42
205 TestMountStart/serial/Stop 1.22
206 TestMountStart/serial/RestartStopped 24.93
207 TestMountStart/serial/VerifyMountPostStop 0.41
210 TestMultiNode/serial/FreshStart2Nodes 109.26
211 TestMultiNode/serial/DeployApp2Nodes 4.87
213 TestMultiNode/serial/AddNode 44.24
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.87
217 TestMultiNode/serial/StopNode 3.03
218 TestMultiNode/serial/StartAfterStop 30
220 TestMultiNode/serial/DeleteNode 1.82
222 TestMultiNode/serial/RestartMultiNode 440.61
223 TestMultiNode/serial/ValidateNameConflict 53.69
230 TestScheduledStopUnix 121.91
236 TestKubernetesUpgrade 162.04
238 TestStoppedBinaryUpgrade/Setup 0.56
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
241 TestNoKubernetes/serial/StartWithK8s 103.06
243 TestNoKubernetes/serial/StartWithStopK8s 7.57
244 TestNoKubernetes/serial/Start 29.82
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
246 TestNoKubernetes/serial/ProfileList 1.91
247 TestNoKubernetes/serial/Stop 1.82
248 TestNoKubernetes/serial/StartNoArgs 47.54
256 TestNetworkPlugins/group/false 4.04
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
269 TestPause/serial/Start 109.87
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.46
271 TestNetworkPlugins/group/auto/Start 129.57
272 TestPause/serial/SecondStartNoReconfiguration 38.36
273 TestPause/serial/Pause 0.79
274 TestPause/serial/VerifyStatus 0.28
275 TestPause/serial/Unpause 0.7
276 TestPause/serial/PauseAgain 1.03
277 TestPause/serial/DeletePaused 1.06
278 TestPause/serial/VerifyDeletedResources 14.89
279 TestNetworkPlugins/group/kindnet/Start 70.37
280 TestNetworkPlugins/group/calico/Start 117.49
281 TestNetworkPlugins/group/auto/KubeletFlags 0.26
282 TestNetworkPlugins/group/auto/NetCatPod 14.41
283 TestNetworkPlugins/group/custom-flannel/Start 120.35
284 TestNetworkPlugins/group/auto/DNS 0.19
285 TestNetworkPlugins/group/auto/Localhost 0.16
286 TestNetworkPlugins/group/auto/HairPin 0.17
287 TestNetworkPlugins/group/enable-default-cni/Start 138.11
288 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
289 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
290 TestNetworkPlugins/group/kindnet/NetCatPod 16.34
291 TestNetworkPlugins/group/kindnet/DNS 0.25
292 TestNetworkPlugins/group/kindnet/Localhost 0.19
293 TestNetworkPlugins/group/kindnet/HairPin 0.23
294 TestNetworkPlugins/group/flannel/Start 84.92
295 TestNetworkPlugins/group/calico/ControllerPod 5.04
296 TestNetworkPlugins/group/calico/KubeletFlags 0.27
297 TestNetworkPlugins/group/calico/NetCatPod 12.47
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
299 TestNetworkPlugins/group/calico/DNS 0.22
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.44
301 TestNetworkPlugins/group/calico/Localhost 0.19
302 TestNetworkPlugins/group/calico/HairPin 0.23
303 TestNetworkPlugins/group/custom-flannel/DNS 0.23
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
306 TestNetworkPlugins/group/bridge/Start 103.45
308 TestStartStop/group/old-k8s-version/serial/FirstStart 146.15
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.39
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
314 TestNetworkPlugins/group/flannel/ControllerPod 5.03
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
316 TestNetworkPlugins/group/flannel/NetCatPod 12.4
318 TestStartStop/group/no-preload/serial/FirstStart 133.28
319 TestNetworkPlugins/group/flannel/DNS 0.22
320 TestNetworkPlugins/group/flannel/Localhost 0.23
321 TestNetworkPlugins/group/flannel/HairPin 0.22
323 TestStartStop/group/embed-certs/serial/FirstStart 74.33
324 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
325 TestNetworkPlugins/group/bridge/NetCatPod 11.6
326 TestNetworkPlugins/group/bridge/DNS 0.21
327 TestNetworkPlugins/group/bridge/Localhost 0.19
328 TestNetworkPlugins/group/bridge/HairPin 0.2
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.91
331 TestStartStop/group/embed-certs/serial/DeployApp 8.44
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
334 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
335 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.96
337 TestStartStop/group/no-preload/serial/DeployApp 9.94
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
345 TestStartStop/group/embed-certs/serial/SecondStart 695.36
346 TestStartStop/group/old-k8s-version/serial/SecondStart 364.66
349 TestStartStop/group/no-preload/serial/SecondStart 651.11
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 589.18
359 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
360 TestStartStop/group/old-k8s-version/serial/Pause 2.84
362 TestStartStop/group/newest-cni/serial/FirstStart 59.12
363 TestStartStop/group/newest-cni/serial/DeployApp 0
364 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.55
367 TestStartStop/group/newest-cni/serial/SecondStart 331.02
368 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
370 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/newest-cni/serial/Pause 2.6
x
+
TestDownloadOnly/v1.16.0/json-events (7.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.428233631s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-526453
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-526453: exit status 85 (77.708423ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:40.298039   83837 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:40.298304   83837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:40.298322   83837 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:40.298327   83837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:40.298526   83837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	W1212 22:02:40.298666   83837 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: no such file or directory
	I1212 22:02:40.299363   83837 out.go:303] Setting JSON to true
	I1212 22:02:40.300241   83837 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9914,"bootTime":1702408646,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:40.300308   83837 start.go:138] virtualization: kvm guest
	I1212 22:02:40.303039   83837 out.go:97] [download-only-526453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:40.304667   83837 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:40.303211   83837 notify.go:220] Checking for updates...
	W1212 22:02:40.303216   83837 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 22:02:40.307776   83837 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:40.309295   83837 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:02:40.310818   83837 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:02:40.312204   83837 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:40.314743   83837 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:40.315036   83837 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:02:40.350405   83837 out.go:97] Using the kvm2 driver based on user configuration
	I1212 22:02:40.350441   83837 start.go:298] selected driver: kvm2
	I1212 22:02:40.350450   83837 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:02:40.350819   83837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:40.350926   83837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:02:40.368090   83837 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:02:40.368156   83837 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:02:40.368671   83837 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1212 22:02:40.368839   83837 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 22:02:40.368921   83837 cni.go:84] Creating CNI manager for ""
	I1212 22:02:40.368938   83837 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:02:40.368951   83837 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 22:02:40.368960   83837 start_flags.go:323] config:
	{Name:download-only-526453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-526453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:40.369225   83837 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:40.371193   83837 out.go:97] Downloading VM boot image ...
	I1212 22:02:40.371271   83837 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/iso/amd64/minikube-v1.32.1-1702394653-17761-amd64.iso
	I1212 22:02:43.215489   83837 out.go:97] Starting control plane node download-only-526453 in cluster download-only-526453
	I1212 22:02:43.215553   83837 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:02:43.240132   83837 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:43.240169   83837 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:43.240320   83837 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:02:43.242278   83837 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 22:02:43.242291   83837 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:43.269559   83837 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-526453"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
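Note: the download-only run only populates the ISO, the preload tarball and the kubectl binary under the run's .minikube/cache directory; no VM is created, which is why the subsequent `minikube logs` call exits with status 85 and reports that the control plane node "" does not exist. A minimal sketch of the same flow, reusing the exact flags from this test:

	# hedged sketch; exit status 85 from `logs` is expected for a download-only profile
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force \
	  --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-526453   # exit 85: no control plane node exists yet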

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.485473323s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-526453
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-526453: exit status 85 (75.071173ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:47.805196   83889 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:47.805349   83889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:47.805360   83889 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:47.805365   83889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:47.805568   83889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	W1212 22:02:47.805722   83889 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: no such file or directory
	I1212 22:02:47.806203   83889 out.go:303] Setting JSON to true
	I1212 22:02:47.807098   83889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9922,"bootTime":1702408646,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:47.807165   83889 start.go:138] virtualization: kvm guest
	I1212 22:02:47.809448   83889 out.go:97] [download-only-526453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:47.811098   83889 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:47.809682   83889 notify.go:220] Checking for updates...
	I1212 22:02:47.814334   83889 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:47.816024   83889 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:02:47.817520   83889 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:02:47.819095   83889 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:47.821862   83889 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:47.822358   83889 config.go:182] Loaded profile config "download-only-526453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1212 22:02:47.822407   83889 start.go:810] api.Load failed for download-only-526453: filestore "download-only-526453": Docker machine "download-only-526453" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:47.822490   83889 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:02:47.822521   83889 start.go:810] api.Load failed for download-only-526453: filestore "download-only-526453": Docker machine "download-only-526453" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:47.855723   83889 out.go:97] Using the kvm2 driver based on existing profile
	I1212 22:02:47.855751   83889 start.go:298] selected driver: kvm2
	I1212 22:02:47.855762   83889 start.go:902] validating driver "kvm2" against &{Name:download-only-526453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-526453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:47.856161   83889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:47.856236   83889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:02:47.871327   83889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:02:47.872053   83889 cni.go:84] Creating CNI manager for ""
	I1212 22:02:47.872072   83889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:02:47.872086   83889 start_flags.go:323] config:
	{Name:download-only-526453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-526453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:47.872247   83889 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:47.874136   83889 out.go:97] Starting control plane node download-only-526453 in cluster download-only-526453
	I1212 22:02:47.874148   83889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:47.902016   83889 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:47.902045   83889 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:47.902236   83889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:47.904376   83889 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 22:02:47.904413   83889 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:47.930744   83889 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:51.717692   83889 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:51.717787   83889 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:52.622471   83889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:02:52.622638   83889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/download-only-526453/config.json ...
	I1212 22:02:52.622865   83889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:52.623083   83889 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-526453"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
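Note: the v1.28.4 run reuses the existing profile and only has to fetch the new preload tarball; the expected md5 is part of the download URL and is verified after the tarball is saved. A minimal sketch for re-checking the cached tarball by hand, assuming the same MINIKUBE_HOME as in the log above:

	# hedged sketch; the expected md5 comes from the checksum= parameter in the download URL above
	md5sum /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	# expected: b0bd7b3b222c094c365d9c9e10e48fc7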

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (6.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-526453 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.788579732s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (6.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-526453
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-526453: exit status 85 (80.534404ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-526453 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-526453           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:53.367921   83934 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:53.368101   83934 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:53.368111   83934 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:53.368116   83934 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:53.368300   83934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	W1212 22:02:53.368433   83934 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-76611/.minikube/config/config.json: no such file or directory
	I1212 22:02:53.368896   83934 out.go:303] Setting JSON to true
	I1212 22:02:53.369724   83934 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9927,"bootTime":1702408646,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:53.369783   83934 start.go:138] virtualization: kvm guest
	I1212 22:02:53.372399   83934 out.go:97] [download-only-526453] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:53.374153   83934 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:53.372570   83934 notify.go:220] Checking for updates...
	I1212 22:02:53.376934   83934 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:53.378976   83934 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:02:53.380677   83934 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:02:53.382239   83934 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:53.385453   83934 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:53.385950   83934 config.go:182] Loaded profile config "download-only-526453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 22:02:53.386027   83934 start.go:810] api.Load failed for download-only-526453: filestore "download-only-526453": Docker machine "download-only-526453" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:53.386129   83934 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:02:53.386190   83934 start.go:810] api.Load failed for download-only-526453: filestore "download-only-526453": Docker machine "download-only-526453" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:53.420553   83934 out.go:97] Using the kvm2 driver based on existing profile
	I1212 22:02:53.420596   83934 start.go:298] selected driver: kvm2
	I1212 22:02:53.420605   83934 start.go:902] validating driver "kvm2" against &{Name:download-only-526453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-526453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:53.421055   83934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:53.421132   83934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17761-76611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 22:02:53.435766   83934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1212 22:02:53.436487   83934 cni.go:84] Creating CNI manager for ""
	I1212 22:02:53.436504   83934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 22:02:53.436517   83934 start_flags.go:323] config:
	{Name:download-only-526453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-526453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:53.436669   83934 iso.go:125] acquiring lock: {Name:mkf96250a0474180453751c446ab11a6b46f047e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:02:53.438416   83934 out.go:97] Starting control plane node download-only-526453 in cluster download-only-526453
	I1212 22:02:53.438428   83934 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:02:53.461048   83934 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:53.461079   83934 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:53.461210   83934 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:02:53.463214   83934 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 22:02:53.463261   83934 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:53.491187   83934 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:4677ed63f210d912abc47b8c2f7401f7 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:57.237163   83934 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:57.237276   83934 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-76611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:58.028162   83934 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1212 22:02:58.028303   83934 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/download-only-526453/config.json ...
	I1212 22:02:58.028504   83934 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:02:58.028697   83934 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17761-76611/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-526453"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-526453
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-207180 --alsologtostderr --binary-mirror http://127.0.0.1:37499 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-207180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-207180
--- PASS: TestBinaryMirror (0.58s)

TestOffline (131.83s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-797518 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-797518 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m10.785120958s)
helpers_test.go:175: Cleaning up "offline-crio-797518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-797518
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-797518: (1.042494567s)
--- PASS: TestOffline (131.83s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-361656
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-361656: exit status 85 (77.242058ms)

-- stdout --
	* Profile "addons-361656" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-361656"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-361656
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-361656: exit status 85 (77.973824ms)

-- stdout --
	* Profile "addons-361656" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-361656"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (143.88s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-361656 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-361656 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.875062876s)
--- PASS: TestAddons/Setup (143.88s)

TestAddons/parallel/Registry (16.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 31.664683ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-t5r8q" [b3f4351f-6afa-4e2b-9e8e-902b9da6d859] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01682749s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vmks8" [6b6af208-0ccf-4504-8c9c-7a50353cd4bb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016149807s
addons_test.go:339: (dbg) Run:  kubectl --context addons-361656 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-361656 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-361656 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.058991775s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 ip
2023/12/12 22:05:40 [DEBUG] GET http://192.168.39.86:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable registry --alsologtostderr -v=1
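Note: the `GET http://192.168.39.86:5000` debug line above is the harness probing the registry addon over the node IP it just obtained with `minikube -p addons-361656 ip`. A minimal Go sketch of the same probe is shown below; it is an illustration only, the IP is taken from this run's log, and the `/v2/` path is an assumption based on the standard Docker Registry HTTP API (the test itself only fetches the root path).

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical standalone probe of the registry addon. The IP below is the
	// node IP from this run's log; substitute the output of `minikube ip`.
	client := &http.Client{Timeout: 5 * time.Second}
	// "/v2/" is the conventional registry API root; a 200 response indicates
	// the registry behind registry-proxy is reachable.
	resp, err := client.Get("http://192.168.39.86:5000/v2/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}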
--- PASS: TestAddons/parallel/Registry (16.03s)

TestAddons/parallel/InspektorGadget (11.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-znnlk" [15c50a50-84de-4320-a2d1-a2df4f012c3e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.042599818s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-361656
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-361656: (6.640585129s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

TestAddons/parallel/MetricsServer (6.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.20409ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xcc44" [1965630d-64fd-4589-8013-157e45b51da6] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.020397313s
addons_test.go:414: (dbg) Run:  kubectl --context addons-361656 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-361656 addons disable metrics-server --alsologtostderr -v=1: (1.569819458s)
--- PASS: TestAddons/parallel/MetricsServer (6.71s)

TestAddons/parallel/HelmTiller (12.29s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 30.844551ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7rrmd" [d8666a3c-6977-4eea-ad49-1d5235697f29] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.020249653s
addons_test.go:472: (dbg) Run:  kubectl --context addons-361656 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-361656 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.350252297s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.29s)

TestAddons/parallel/CSI (68.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 8.387431ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-361656 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-361656 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [17e1c454-b795-4d8d-ab01-b4e906d65d8d] Pending
helpers_test.go:344: "task-pv-pod" [17e1c454-b795-4d8d-ab01-b4e906d65d8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [17e1c454-b795-4d8d-ab01-b4e906d65d8d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.02041252s
addons_test.go:583: (dbg) Run:  kubectl --context addons-361656 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-361656 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-361656 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-361656 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-361656 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-361656 delete pod task-pv-pod: (1.205406427s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-361656 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-361656 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-361656 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a5e84220-c850-4d52-92e1-c0fed23173bc] Pending
helpers_test.go:344: "task-pv-pod-restore" [a5e84220-c850-4d52-92e1-c0fed23173bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a5e84220-c850-4d52-92e1-c0fed23173bc] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.020788762s
addons_test.go:625: (dbg) Run:  kubectl --context addons-361656 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-361656 delete pod task-pv-pod-restore: (1.378346223s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-361656 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-361656 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-361656 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.87881352s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable volumesnapshots --alsologtostderr -v=1
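Note: the long runs of repeated `kubectl get pvc ... -o jsonpath={.status.phase}` calls above are the helper polling each claim until it reports Bound. A minimal client-go sketch of that polling loop follows; it is an illustration only, and the poll interval, timeout, and kubeconfig handling are assumptions rather than the harness's actual implementation in helpers_test.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until its status phase is
// Bound or the timeout expires, mirroring the repeated jsonpath queries above.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	// Load the kubeconfig from its default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPVCBound(context.Background(), cs, "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc default/hpvc is Bound")
}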
--- PASS: TestAddons/parallel/CSI (68.03s)

TestAddons/parallel/Headlamp (15.87s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-361656 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-361656 --alsologtostderr -v=1: (1.833374529s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-wvjmv" [80a749a9-3e48-4683-a511-fa220661bdc4] Pending
helpers_test.go:344: "headlamp-777fd4b855-wvjmv" [80a749a9-3e48-4683-a511-fa220661bdc4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-wvjmv" [80a749a9-3e48-4683-a511-fa220661bdc4] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.036119523s
--- PASS: TestAddons/parallel/Headlamp (15.87s)

TestAddons/parallel/CloudSpanner (5.92s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-w5rn5" [592f8910-b4f7-4545-a1b9-76883a4c6a84] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012880246s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-361656
--- PASS: TestAddons/parallel/CloudSpanner (5.92s)

TestAddons/parallel/LocalPath (15.59s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-361656 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-361656 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8c0c2a6c-1adc-4ad5-b5c7-00756a129aae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8c0c2a6c-1adc-4ad5-b5c7-00756a129aae] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8c0c2a6c-1adc-4ad5-b5c7-00756a129aae] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.011383137s
addons_test.go:890: (dbg) Run:  kubectl --context addons-361656 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 ssh "cat /opt/local-path-provisioner/pvc-274c2300-c02f-4998-a3f1-15f0e2208ef9_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-361656 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-361656 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-361656 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.59s)

TestAddons/parallel/NvidiaDevicePlugin (5.92s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hp8gn" [66283d05-a203-4f57-9ab7-6e01fd05f9de] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014877746s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-361656
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.92s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-361656 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-361656 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestCertOptions (96.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-082418 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1212 23:00:25.172217   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-082418 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m34.572897023s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-082418 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-082418 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-082418 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-082418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-082418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-082418: (1.067645206s)
--- PASS: TestCertOptions (96.16s)

TestCertExpiration (269.02s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-409818 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-409818 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (59.130505749s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-409818 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-409818 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.039795984s)
helpers_test.go:175: Cleaning up "cert-expiration-409818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-409818
--- PASS: TestCertExpiration (269.02s)

TestForceSystemdFlag (74.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-482424 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-482424 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.88746324s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-482424 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-482424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-482424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-482424: (1.051869156s)
--- PASS: TestForceSystemdFlag (74.16s)

TestForceSystemdEnv (58.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-677496 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-677496 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.586430495s)
helpers_test.go:175: Cleaning up "force-systemd-env-677496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-677496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-677496: (1.053091468s)
--- PASS: TestForceSystemdEnv (58.64s)

TestKVMDriverInstallOrUpdate (1.19s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.19s)

TestErrorSpam/setup (45.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-577668 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-577668 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-577668 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-577668 --driver=kvm2  --container-runtime=crio: (45.560846663s)
--- PASS: TestErrorSpam/setup (45.56s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (2.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 stop: (2.107258827s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-577668 --log_dir /tmp/nospam-577668 stop
--- PASS: TestErrorSpam/stop (2.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17761-76611/.minikube/files/etc/test/nested/copy/83825/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-136031 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.136189033s)
--- PASS: TestFunctional/serial/StartWithProxy (61.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.91s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-136031 --alsologtostderr -v=8: (36.912001046s)
functional_test.go:659: soft start took 36.912655035s for "functional-136031" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.91s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-136031 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:3.1: (1.044908704s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:3.3: (1.132359776s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 cache add registry.k8s.io/pause:latest: (1.183722193s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-136031 /tmp/TestFunctionalserialCacheCmdcacheadd_local3857261885/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache add minikube-local-cache-test:functional-136031
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache delete minikube-local-cache-test:functional-136031
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-136031
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (234.191965ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 kubectl -- --context functional-136031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-136031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (31.71s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-136031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.705347882s)
functional_test.go:757: restart took 31.705529412s for "functional-136031" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.71s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-136031 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 logs: (1.56443325s)
--- PASS: TestFunctional/serial/LogsCmd (1.56s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 logs --file /tmp/TestFunctionalserialLogsFileCmd3189987659/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 logs --file /tmp/TestFunctionalserialLogsFileCmd3189987659/001/logs.txt: (1.540553449s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)
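The two logs subtests above cover both output destinations. A sketch with the same flags (the output path is arbitrary, and plain `minikube` stands in for the built binary):

    minikube -p functional-136031 logs                        # dump cluster logs to stdout
    minikube -p functional-136031 logs --file /tmp/logs.txt   # write the same logs to a file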

                                                
                                    
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-136031 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-136031
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-136031: exit status 115 (306.305217ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.133:31289 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-136031 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)
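The failure mode being asserted: a Service whose selector matches no running pod makes `minikube service` exit with status 115 and an SVC_UNREACHABLE message, as in the stderr above. A sketch using the same manifest the test applies:

    kubectl --context functional-136031 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-136031    # exit 115: no running pod for the service
    kubectl --context functional-136031 delete -f testdata/invalidsvc.yaml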

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 config get cpus: exit status 14 (76.026179ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 config get cpus: exit status 14 (63.252051ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
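The round trip being verified, as a standalone sketch: `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the two non-zero exits above show.

    minikube -p functional-136031 config unset cpus
    minikube -p functional-136031 config get cpus    # exit 14: key not set
    minikube -p functional-136031 config set cpus 2
    minikube -p functional-136031 config get cpus    # prints 2
    minikube -p functional-136031 config unset cpus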

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136031 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-136031 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 91207: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.04s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.832127ms)

                                                
                                                
-- stdout --
	* [functional-136031] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:14:49.420526   90988 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:14:49.420721   90988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:14:49.420734   90988 out.go:309] Setting ErrFile to fd 2...
	I1212 22:14:49.420742   90988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:14:49.420940   90988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:14:49.421520   90988 out.go:303] Setting JSON to false
	I1212 22:14:49.422536   90988 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10643,"bootTime":1702408646,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:14:49.422604   90988 start.go:138] virtualization: kvm guest
	I1212 22:14:49.424988   90988 out.go:177] * [functional-136031] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:14:49.427047   90988 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:14:49.428506   90988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:14:49.427113   90988 notify.go:220] Checking for updates...
	I1212 22:14:49.431089   90988 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:14:49.432389   90988 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:14:49.433817   90988 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:14:49.435207   90988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:14:49.436852   90988 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:14:49.437244   90988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:14:49.437303   90988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:14:49.451941   90988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I1212 22:14:49.452352   90988 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:14:49.452898   90988 main.go:141] libmachine: Using API Version  1
	I1212 22:14:49.452920   90988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:14:49.453319   90988 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:14:49.453558   90988 main.go:141] libmachine: (functional-136031) Calling .DriverName
	I1212 22:14:49.453789   90988 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:14:49.454100   90988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:14:49.454143   90988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:14:49.468995   90988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I1212 22:14:49.469345   90988 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:14:49.469781   90988 main.go:141] libmachine: Using API Version  1
	I1212 22:14:49.469802   90988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:14:49.470095   90988 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:14:49.470271   90988 main.go:141] libmachine: (functional-136031) Calling .DriverName
	I1212 22:14:49.502477   90988 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 22:14:49.503735   90988 start.go:298] selected driver: kvm2
	I1212 22:14:49.503747   90988 start.go:902] validating driver "kvm2" against &{Name:functional-136031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-136031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.133 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:14:49.503908   90988 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:14:49.505779   90988 out.go:177] 
	W1212 22:14:49.506951   90988 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 22:14:49.508227   90988 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
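What the two runs above exercise: with --dry-run nothing is created, but argument validation still runs, so the deliberately undersized 250MB request fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second run, without --memory, validates cleanly against the existing profile. A sketch of both, with plain `minikube` standing in for the built binary:

    minikube start -p functional-136031 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio     # exit 23: below the 1800MB usable minimum
    minikube start -p functional-136031 --dry-run \
      --driver=kvm2 --container-runtime=crio     # validates against the existing profile, starts nothing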

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-136031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-136031 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (165.141961ms)

                                                
                                                
-- stdout --
	* [functional-136031] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:14:49.731801   91069 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:14:49.731951   91069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:14:49.731962   91069 out.go:309] Setting ErrFile to fd 2...
	I1212 22:14:49.731967   91069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:14:49.732232   91069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:14:49.732806   91069 out.go:303] Setting JSON to false
	I1212 22:14:49.733754   91069 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10644,"bootTime":1702408646,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:14:49.733839   91069 start.go:138] virtualization: kvm guest
	I1212 22:14:49.735946   91069 out.go:177] * [functional-136031] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1212 22:14:49.737668   91069 notify.go:220] Checking for updates...
	I1212 22:14:49.737688   91069 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:14:49.739522   91069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:14:49.741507   91069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:14:49.743077   91069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:14:49.744318   91069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:14:49.745446   91069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:14:49.747280   91069 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:14:49.747915   91069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:14:49.747974   91069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:14:49.762616   91069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I1212 22:14:49.763023   91069 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:14:49.763663   91069 main.go:141] libmachine: Using API Version  1
	I1212 22:14:49.763685   91069 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:14:49.764060   91069 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:14:49.764255   91069 main.go:141] libmachine: (functional-136031) Calling .DriverName
	I1212 22:14:49.764525   91069 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:14:49.764827   91069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:14:49.764865   91069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:14:49.780376   91069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
	I1212 22:14:49.780805   91069 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:14:49.781298   91069 main.go:141] libmachine: Using API Version  1
	I1212 22:14:49.781322   91069 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:14:49.781661   91069 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:14:49.781842   91069 main.go:141] libmachine: (functional-136031) Calling .DriverName
	I1212 22:14:49.819375   91069 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 22:14:49.820803   91069 start.go:298] selected driver: kvm2
	I1212 22:14:49.820825   91069 start.go:902] validating driver "kvm2" against &{Name:functional-136031 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17761/minikube-v1.32.1-1702394653-17761-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-136031 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.133 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:14:49.820922   91069 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:14:49.823075   91069 out.go:177] 
	W1212 22:14:49.824252   91069 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 22:14:49.825511   91069 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
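The three output modes checked above, sketched with the same arguments (the -f template, including its "kublet" key spelling, is copied verbatim from the test invocation):

    minikube -p functional-136031 status
    minikube -p functional-136031 status -o json
    minikube -p functional-136031 status \
      -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}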

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-136031 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-136031 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-dh4tr" [b09366a3-e53a-494e-8339-56050b8b2264] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-dh4tr" [b09366a3-e53a-494e-8339-56050b8b2264] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.053524407s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.133:32040
functional_test.go:1674: http://192.168.50.133:32040: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-dh4tr

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.133:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.133:32040
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.80s)
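The NodePort round trip above, condensed into a sketch (deployment name and image are the ones the test creates; the final curl line is an assumed convenience for fetching the endpoint, not part of the test itself):

    kubectl --context functional-136031 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-136031 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    minikube -p functional-136031 service hello-node-connect --url          # e.g. http://192.168.50.133:32040
    curl "$(minikube -p functional-136031 service hello-node-connect --url)"   # echoserver reply as above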

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [893de536-04e0-47d3-ab0c-90ab817030ea] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022067482s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-136031 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-136031 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-136031 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-136031 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-136031 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [810efc91-eced-4061-81ce-f4c06af92a5b] Pending
helpers_test.go:344: "sp-pod" [810efc91-eced-4061-81ce-f4c06af92a5b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [810efc91-eced-4061-81ce-f4c06af92a5b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.040908517s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-136031 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-136031 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-136031 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dc9d9df3-6e4b-44d9-976c-c94fa5179377] Pending
helpers_test.go:344: "sp-pod" [dc9d9df3-6e4b-44d9-976c-c94fa5179377] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dc9d9df3-6e4b-44d9-976c-c94fa5179377] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.040409457s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-136031 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.89s)
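The property being demonstrated is persistence: a file written under the PVC mount survives deleting and re-creating the pod. A condensed sketch with the same manifests and pod name:

    kubectl --context functional-136031 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-136031 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-136031 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-136031 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-136031 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-136031 exec sp-pod -- ls /tmp/mount                     # foo is still there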

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh -n functional-136031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cp functional-136031:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1725515579/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh -n functional-136031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh -n functional-136031 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.59s)
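The copy directions covered above, sketched with the paths the test uses (the host-side destination is shortened here; the pass implies the missing /tmp/does/not/exist directory on the node is created for the third copy):

    minikube -p functional-136031 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
    minikube -p functional-136031 cp functional-136031:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
    minikube -p functional-136031 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # host -> node, new dir
    minikube -p functional-136031 ssh -n functional-136031 "sudo cat /home/docker/cp-test.txt"     # read it back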

                                                
                                    
TestFunctional/parallel/MySQL (29.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-136031 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5pv84" [55af03a3-c7d6-4846-98b5-0ca1d79130ab] Pending
helpers_test.go:344: "mysql-859648c796-5pv84" [55af03a3-c7d6-4846-98b5-0ca1d79130ab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5pv84" [55af03a3-c7d6-4846-98b5-0ca1d79130ab] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.071677479s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;": exit status 1 (443.330205ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;": exit status 1 (650.405172ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;": exit status 1 (625.998593ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.58s)
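The retried execs above are the point of interest: immediately after the pod reports Running, mysqld may still refuse connections (ERROR 2002) or not yet have the root password applied (ERROR 1045), so the test keeps re-running the query until it succeeds. A hypothetical retry loop around the same exec, for doing the check by hand:

    # Retry the query for up to ~50s while the server finishes initialising.
    for i in $(seq 1 10); do
      kubectl --context functional-136031 exec mysql-859648c796-5pv84 -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done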

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/83825/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /etc/test/nested/copy/83825/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/83825.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /etc/ssl/certs/83825.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/83825.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /usr/share/ca-certificates/83825.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/838252.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /etc/ssl/certs/838252.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/838252.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /usr/share/ca-certificates/838252.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-136031 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh "sudo systemctl is-active docker": exit status 1 (242.025267ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh "sudo systemctl is-active containerd": exit status 1 (232.155291ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
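What is asserted: with cri-o selected as the runtime, the docker and containerd units are inactive, so `systemctl is-active` exits with status 3 and the ssh wrapper propagates a non-zero exit, as seen in the stderr above. The same check by hand:

    minikube -p functional-136031 ssh "sudo systemctl is-active docker"       # prints: inactive
    minikube -p functional-136031 ssh "sudo systemctl is-active containerd"   # prints: inactive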

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136031 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| localhost/minikube-local-cache-test     | functional-136031  | 222d3122eaa0d | 3.34kB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-136031  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136031 image ls --format table --alsologtostderr:
I1212 22:15:01.818797   91969 out.go:296] Setting OutFile to fd 1 ...
I1212 22:15:01.819007   91969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.819022   91969 out.go:309] Setting ErrFile to fd 2...
I1212 22:15:01.819029   91969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.819363   91969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
I1212 22:15:01.820235   91969 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.820412   91969 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.821029   91969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.821090   91969 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.835912   91969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
I1212 22:15:01.836501   91969 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.837224   91969 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.837253   91969 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.837643   91969 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.837949   91969 main.go:141] libmachine: (functional-136031) Calling .GetState
I1212 22:15:01.840208   91969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.840264   91969 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.855418   91969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
I1212 22:15:01.855906   91969 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.856507   91969 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.856549   91969 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.856907   91969 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.857128   91969 main.go:141] libmachine: (functional-136031) Calling .DriverName
I1212 22:15:01.857377   91969 ssh_runner.go:195] Run: systemctl --version
I1212 22:15:01.857401   91969 main.go:141] libmachine: (functional-136031) Calling .GetSSHHostname
I1212 22:15:01.860296   91969 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.860696   91969 main.go:141] libmachine: (functional-136031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2e:47", ip: ""} in network mk-functional-136031: {Iface:virbr1 ExpiryTime:2023-12-12 23:12:08 +0000 UTC Type:0 Mac:52:54:00:da:2e:47 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:functional-136031 Clientid:01:52:54:00:da:2e:47}
I1212 22:15:01.860738   91969 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined IP address 192.168.50.133 and MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.860902   91969 main.go:141] libmachine: (functional-136031) Calling .GetSSHPort
I1212 22:15:01.861083   91969 main.go:141] libmachine: (functional-136031) Calling .GetSSHKeyPath
I1212 22:15:01.861244   91969 main.go:141] libmachine: (functional-136031) Calling .GetSSHUsername
I1212 22:15:01.861407   91969 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/functional-136031/id_rsa Username:docker}
I1212 22:15:01.973345   91969 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 22:15:02.047822   91969 main.go:141] libmachine: Making call to close driver server
I1212 22:15:02.047840   91969 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:02.048156   91969 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:02.048182   91969 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:02.048216   91969 main.go:141] libmachine: Making call to close driver server
I1212 22:15:02.048230   91969 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:02.048569   91969 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:02.048609   91969 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:02.048600   91969 main.go:141] libmachine: (functional-136031) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
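The listing formats exercised by this and the neighbouring ImageCommands subtests, as a sketch:

    minikube -p functional-136031 image ls --format table   # the table shown above
    minikube -p functional-136031 image ls --format json    # the same data as a JSON array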

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136031 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d90
0bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/
dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/goog
le-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-136031"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b
78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"222d3122eaa0dd75c3d50528cccd194c719680ad6b7194f37d9fff4bb4792c05","repoDigests":["localhost/minikube-local-cache-test@sha256:85180a8f0be3cacaf7b3ffec35b9b4ebc8e5351121ac0f777b4894efd886ddc8"],"repoTags":["localhost/minikube-local-cache-test:functional-136031"],"size":"3341"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-api
server@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube
-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136031 image ls --format json --alsologtostderr:
I1212 22:15:01.463927   91933 out.go:296] Setting OutFile to fd 1 ...
I1212 22:15:01.464252   91933 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.464264   91933 out.go:309] Setting ErrFile to fd 2...
I1212 22:15:01.464269   91933 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.464524   91933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
I1212 22:15:01.465415   91933 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.465569   91933 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.466102   91933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.466182   91933 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.483772   91933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
I1212 22:15:01.484233   91933 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.485033   91933 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.485062   91933 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.485402   91933 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.485610   91933 main.go:141] libmachine: (functional-136031) Calling .GetState
I1212 22:15:01.487796   91933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.487844   91933 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.502344   91933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33871
I1212 22:15:01.502744   91933 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.503183   91933 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.503207   91933 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.503612   91933 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.503833   91933 main.go:141] libmachine: (functional-136031) Calling .DriverName
I1212 22:15:01.504088   91933 ssh_runner.go:195] Run: systemctl --version
I1212 22:15:01.504114   91933 main.go:141] libmachine: (functional-136031) Calling .GetSSHHostname
I1212 22:15:01.507141   91933 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.507572   91933 main.go:141] libmachine: (functional-136031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2e:47", ip: ""} in network mk-functional-136031: {Iface:virbr1 ExpiryTime:2023-12-12 23:12:08 +0000 UTC Type:0 Mac:52:54:00:da:2e:47 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:functional-136031 Clientid:01:52:54:00:da:2e:47}
I1212 22:15:01.507609   91933 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined IP address 192.168.50.133 and MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.507681   91933 main.go:141] libmachine: (functional-136031) Calling .GetSSHPort
I1212 22:15:01.507836   91933 main.go:141] libmachine: (functional-136031) Calling .GetSSHKeyPath
I1212 22:15:01.507979   91933 main.go:141] libmachine: (functional-136031) Calling .GetSSHUsername
I1212 22:15:01.508142   91933 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/functional-136031/id_rsa Username:docker}
I1212 22:15:01.639551   91933 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 22:15:01.748927   91933 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.748942   91933 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.749298   91933 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.749354   91933 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:01.749355   91933 main.go:141] libmachine: (functional-136031) DBG | Closing plugin on server side
I1212 22:15:01.749377   91933 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.749393   91933 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.749668   91933 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.749687   91933 main.go:141] libmachine: (functional-136031) DBG | Closing plugin on server side
I1212 22:15:01.749693   91933 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
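Note: as the alsologtostderr output above shows, the image listing is produced by SSH-ing into the node and running "sudo crictl images --output json". A minimal sketch of reproducing the same listing by hand follows; the profile name comes from this run, while piping through jq on the host is an assumption and is not part of the test:

  # Query CRI-O's image store on the minikube node directly (sketch, not part of the test).
  out/minikube-linux-amd64 -p functional-136031 ssh -- sudo crictl images --output json \
    | jq -r '.images[] | "\(.id)  \(.repoTags | join(","))"'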

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image ls --format yaml --alsologtostderr: (1.115114121s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136031 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 222d3122eaa0dd75c3d50528cccd194c719680ad6b7194f37d9fff4bb4792c05
repoDigests:
- localhost/minikube-local-cache-test@sha256:85180a8f0be3cacaf7b3ffec35b9b4ebc8e5351121ac0f777b4894efd886ddc8
repoTags:
- localhost/minikube-local-cache-test:functional-136031
size: "3341"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-136031
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136031 image ls --format yaml --alsologtostderr:
I1212 22:15:00.337659   91869 out.go:296] Setting OutFile to fd 1 ...
I1212 22:15:00.337944   91869 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:00.337955   91869 out.go:309] Setting ErrFile to fd 2...
I1212 22:15:00.337962   91869 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:00.338171   91869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
I1212 22:15:00.338749   91869 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:00.338894   91869 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:00.339324   91869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:00.339391   91869 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:00.354214   91869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
I1212 22:15:00.354739   91869 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:00.355431   91869 main.go:141] libmachine: Using API Version  1
I1212 22:15:00.355465   91869 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:00.355774   91869 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:00.355975   91869 main.go:141] libmachine: (functional-136031) Calling .GetState
I1212 22:15:00.357891   91869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:00.357932   91869 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:00.372915   91869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
I1212 22:15:00.373415   91869 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:00.373912   91869 main.go:141] libmachine: Using API Version  1
I1212 22:15:00.373939   91869 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:00.374251   91869 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:00.374465   91869 main.go:141] libmachine: (functional-136031) Calling .DriverName
I1212 22:15:00.374667   91869 ssh_runner.go:195] Run: systemctl --version
I1212 22:15:00.374696   91869 main.go:141] libmachine: (functional-136031) Calling .GetSSHHostname
I1212 22:15:00.377730   91869 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:00.378168   91869 main.go:141] libmachine: (functional-136031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2e:47", ip: ""} in network mk-functional-136031: {Iface:virbr1 ExpiryTime:2023-12-12 23:12:08 +0000 UTC Type:0 Mac:52:54:00:da:2e:47 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:functional-136031 Clientid:01:52:54:00:da:2e:47}
I1212 22:15:00.378203   91869 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined IP address 192.168.50.133 and MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:00.378363   91869 main.go:141] libmachine: (functional-136031) Calling .GetSSHPort
I1212 22:15:00.378555   91869 main.go:141] libmachine: (functional-136031) Calling .GetSSHKeyPath
I1212 22:15:00.378719   91869 main.go:141] libmachine: (functional-136031) Calling .GetSSHUsername
I1212 22:15:00.378867   91869 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/functional-136031/id_rsa Username:docker}
I1212 22:15:00.521977   91869 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 22:15:01.385305   91869 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.385327   91869 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.385695   91869 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.385715   91869 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:01.385733   91869 main.go:141] libmachine: Making call to close driver server
I1212 22:15:01.385748   91869 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:01.385989   91869 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:01.386012   91869 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.12s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh pgrep buildkitd: exit status 1 (214.669398ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image build -t localhost/my-image:functional-136031 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image build -t localhost/my-image:functional-136031 testdata/build --alsologtostderr: (2.488412968s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-136031 image build -t localhost/my-image:functional-136031 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f089177023a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-136031
--> b8e738aae82
Successfully tagged localhost/my-image:functional-136031
b8e738aae827f81deb7a2847a8a6ccebb32e044e8b3443b28f1b02784901406e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-136031 image build -t localhost/my-image:functional-136031 testdata/build --alsologtostderr:
I1212 22:15:01.441103   91923 out.go:296] Setting OutFile to fd 1 ...
I1212 22:15:01.441444   91923 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.441458   91923 out.go:309] Setting ErrFile to fd 2...
I1212 22:15:01.441467   91923 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:15:01.441792   91923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
I1212 22:15:01.442666   91923 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.443957   91923 config.go:182] Loaded profile config "functional-136031": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:15:01.444652   91923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.444731   91923 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.461498   91923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
I1212 22:15:01.462057   91923 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.462735   91923 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.462763   91923 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.463170   91923 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.463373   91923 main.go:141] libmachine: (functional-136031) Calling .GetState
I1212 22:15:01.465777   91923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1212 22:15:01.465831   91923 main.go:141] libmachine: Launching plugin server for driver kvm2
I1212 22:15:01.481042   91923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
I1212 22:15:01.481467   91923 main.go:141] libmachine: () Calling .GetVersion
I1212 22:15:01.481994   91923 main.go:141] libmachine: Using API Version  1
I1212 22:15:01.482022   91923 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 22:15:01.482411   91923 main.go:141] libmachine: () Calling .GetMachineName
I1212 22:15:01.482631   91923 main.go:141] libmachine: (functional-136031) Calling .DriverName
I1212 22:15:01.482864   91923 ssh_runner.go:195] Run: systemctl --version
I1212 22:15:01.482908   91923 main.go:141] libmachine: (functional-136031) Calling .GetSSHHostname
I1212 22:15:01.485991   91923 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.486537   91923 main.go:141] libmachine: (functional-136031) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:2e:47", ip: ""} in network mk-functional-136031: {Iface:virbr1 ExpiryTime:2023-12-12 23:12:08 +0000 UTC Type:0 Mac:52:54:00:da:2e:47 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:functional-136031 Clientid:01:52:54:00:da:2e:47}
I1212 22:15:01.486580   91923 main.go:141] libmachine: (functional-136031) DBG | domain functional-136031 has defined IP address 192.168.50.133 and MAC address 52:54:00:da:2e:47 in network mk-functional-136031
I1212 22:15:01.486752   91923 main.go:141] libmachine: (functional-136031) Calling .GetSSHPort
I1212 22:15:01.486914   91923 main.go:141] libmachine: (functional-136031) Calling .GetSSHKeyPath
I1212 22:15:01.487094   91923 main.go:141] libmachine: (functional-136031) Calling .GetSSHUsername
I1212 22:15:01.487251   91923 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/functional-136031/id_rsa Username:docker}
I1212 22:15:01.604317   91923 build_images.go:151] Building image from path: /tmp/build.2973214455.tar
I1212 22:15:01.604390   91923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 22:15:01.629216   91923 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2973214455.tar
I1212 22:15:01.636202   91923 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2973214455.tar: stat -c "%s %y" /var/lib/minikube/build/build.2973214455.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2973214455.tar': No such file or directory
I1212 22:15:01.636236   91923 ssh_runner.go:362] scp /tmp/build.2973214455.tar --> /var/lib/minikube/build/build.2973214455.tar (3072 bytes)
I1212 22:15:01.742324   91923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2973214455
I1212 22:15:01.768402   91923 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2973214455 -xf /var/lib/minikube/build/build.2973214455.tar
I1212 22:15:01.805047   91923 crio.go:297] Building image: /var/lib/minikube/build/build.2973214455
I1212 22:15:01.805163   91923 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-136031 /var/lib/minikube/build/build.2973214455 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 22:15:03.827306   91923 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-136031 /var/lib/minikube/build/build.2973214455 --cgroup-manager=cgroupfs: (2.022113417s)
I1212 22:15:03.827367   91923 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2973214455
I1212 22:15:03.838351   91923 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2973214455.tar
I1212 22:15:03.854133   91923 build_images.go:207] Built localhost/my-image:functional-136031 from /tmp/build.2973214455.tar
I1212 22:15:03.854172   91923 build_images.go:123] succeeded building to: functional-136031
I1212 22:15:03.854176   91923 build_images.go:124] failed building to: 
I1212 22:15:03.854225   91923 main.go:141] libmachine: Making call to close driver server
I1212 22:15:03.854245   91923 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:03.854558   91923 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:03.854567   91923 main.go:141] libmachine: (functional-136031) DBG | Closing plugin on server side
I1212 22:15:03.854577   91923 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 22:15:03.854606   91923 main.go:141] libmachine: Making call to close driver server
I1212 22:15:03.854622   91923 main.go:141] libmachine: (functional-136031) Calling .Close
I1212 22:15:03.854882   91923 main.go:141] libmachine: (functional-136031) DBG | Closing plugin on server side
I1212 22:15:03.854919   91923 main.go:141] libmachine: Successfully made call to close driver server
I1212 22:15:03.854946   91923 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
2023/12/12 22:15:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)
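For reference, the three STEP lines in the build output above imply a minimal build context (a Containerfile/Dockerfile plus a content.txt). The actual contents of testdata/build are not included in this report, so the sketch below is an assumed equivalent, not the real test data:

  # Rebuild an equivalent image by hand (file contents here are assumptions).
  ctx=$(mktemp -d)
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > "$ctx/Dockerfile"
  echo "hello" > "$ctx/content.txt"
  out/minikube-linux-amd64 -p functional-136031 image build -t localhost/my-image:functional-136031 "$ctx" --alsologtostderr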

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.046070096s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-136031
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.5s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr: (4.892605685s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.35s)

TestFunctional/parallel/ServiceCmd/DeployApp (27.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-136031 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-136031 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-n2twz" [bc3615b0-f2c2-4c36-8de9-27b7a0fd634a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-n2twz" [bc3615b0-f2c2-4c36-8de9-27b7a0fd634a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 27.106582384s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (27.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr: (3.813087126s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-136031
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image load --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr: (9.67113317s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.03s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image save gcr.io/google-containers/addon-resizer:functional-136031 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image save gcr.io/google-containers/addon-resizer:functional-136031 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.511488583s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image rm gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.255417821s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.55s)

TestFunctional/parallel/ServiceCmd/List (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service list -o json
functional_test.go:1493: Took "345.07945ms" to run "out/minikube-linux-amd64 -p functional-136031 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.133:30298
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-136031
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 image save --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-136031 image save --daemon gcr.io/google-containers/addon-resizer:functional-136031 --alsologtostderr: (1.658476427s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-136031
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.70s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.133:30298
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "325.498755ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "78.94114ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/MountCmd/any-port (7.68s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdany-port334233740/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702419288857744900" to /tmp/TestFunctionalparallelMountCmdany-port334233740/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702419288857744900" to /tmp/TestFunctionalparallelMountCmdany-port334233740/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702419288857744900" to /tmp/TestFunctionalparallelMountCmdany-port334233740/001/test-1702419288857744900
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.696199ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 22:14 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 22:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 22:14 test-1702419288857744900
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh cat /mount-9p/test-1702419288857744900
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-136031 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [25a8ac87-26ef-4bd9-9524-b5baacd21a95] Pending
helpers_test.go:344: "busybox-mount" [25a8ac87-26ef-4bd9-9524-b5baacd21a95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [25a8ac87-26ef-4bd9-9524-b5baacd21a95] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [25a8ac87-26ef-4bd9-9524-b5baacd21a95] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.021400605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-136031 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdany-port334233740/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.68s)
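The any-port flow above (a host directory exported to the guest over 9p, then verified from inside the VM) can be repeated by hand. A rough sketch follows; the temporary directory, file contents, sleep, and backgrounding/kill handling are assumptions rather than what the test harness does internally:

  # Export a host directory into the guest and verify the 9p mount (sketch only).
  src=$(mktemp -d)
  echo "created-by-hand" > "$src/created-by-test"
  out/minikube-linux-amd64 mount -p functional-136031 "$src:/mount-9p" --alsologtostderr -v=1 &
  mount_pid=$!
  sleep 5   # give the mount a moment to come up
  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-136031 ssh -- ls -la /mount-9p
  kill "$mount_pid"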

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "236.193514ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "62.802456ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/specific-port (2.12s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdspecific-port3281643077/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.538279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdspecific-port3281643077/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdspecific-port3281643077/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T" /mount1: exit status 1 (327.909094ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-136031 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-136031 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-136031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2824156039/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-136031
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-136031
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-136031
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.89s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-220067 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1212 22:15:25.172254   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.178409   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.188689   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.209066   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.249383   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.329763   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.490202   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:25.810815   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:26.451847   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:27.732991   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:30.293453   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:35.413738   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:15:45.654011   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:16:06.135004   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-220067 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.89022554s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (76.89s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons enable ingress --alsologtostderr -v=5: (12.53262113s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.53s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-220067 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (101.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-214035 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1212 22:19:38.285497   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:19:58.766184   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:20:25.172739   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:20:39.726825   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:20:52.856178   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-214035 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.221801552s)
--- PASS: TestJSONOutput/start/Command (101.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-214035 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-214035 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-214035 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-214035 --output=json --user=testUser: (7.106936918s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-440670 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-440670 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.428065ms)
-- stdout --
	{"specversion":"1.0","id":"d7312b42-71cc-4ea9-9d83-06a18c248055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-440670] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"71e9ba5a-5605-4193-b6c5-40fda47edc47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"d05258ea-57d5-4ee7-ad64-95547cf25424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35d67478-d3a8-40f9-86b9-e88134fd6b4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig"}}
	{"specversion":"1.0","id":"311a7b39-1533-4a72-b0cc-4f5baf69d754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube"}}
	{"specversion":"1.0","id":"0d7f3043-b030-472e-af11-9138893ce568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"19fdf655-5228-4c3f-b633-ca9eada858ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"704b0ed8-f777-4535-9825-952548453817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-440670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-440670
--- PASS: TestErrorJSONOutput (0.22s)
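Because every line of the --output=json stream is a self-contained CloudEvents-style object, it is easy to post-process; a rough sketch using jq (jq is an external tool, not something the test invokes):

  # print the name, exit code and message of the structured error event
  minikube start -p json-output-error-440670 --memory=2200 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | [.data.name, .data.exitcode, .data.message] | @tsv'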

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (98.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-773714 --driver=kvm2  --container-runtime=crio
E1212 22:21:39.569451   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.574791   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.585077   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.605391   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.645700   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.726085   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:39.886542   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:40.207132   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:40.848083   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:42.128609   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:44.690493   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:21:49.811654   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:22:00.052249   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:22:01.648265   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-773714 --driver=kvm2  --container-runtime=crio: (47.301266757s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-776531 --driver=kvm2  --container-runtime=crio
E1212 22:22:20.533470   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-776531 --driver=kvm2  --container-runtime=crio: (48.365205093s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-773714
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-776531
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-776531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-776531
helpers_test.go:175: Cleaning up "first-773714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-773714
--- PASS: TestMinikubeProfile (98.35s)
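The profile juggling above boils down to a short CLI sequence; a sketch with the same profile names:

  minikube start -p first-773714 --driver=kvm2 --container-runtime=crio
  minikube start -p second-776531 --driver=kvm2 --container-runtime=crio
  minikube profile first-773714      # make 'first' the active profile
  minikube profile list -ojson       # machine-readable view of both profiles
  minikube delete -p second-776531
  minikube delete -p first-773714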

TestMountStart/serial/StartWithMountFirst (28.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-165133 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 22:23:01.494484   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-165133 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.335353739s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.34s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165133 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165133 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (27.98s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-185823 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-185823 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.978921501s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.98s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-165133 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-185823
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-185823: (1.22003386s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (24.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-185823
E1212 22:24:17.803456   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-185823: (23.925826504s)
E1212 22:24:23.415420   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.93s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-185823 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)
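Taken together, the MountStart steps above amount to the flow below; a sketch reusing the flags the test passes (the default --mount target inside the VM is /minikube-host, which is what the ls checks above inspect):

  # VM-only profile (no Kubernetes) with a 9p mount configured at start time
  minikube start -p mount-start-2-185823 --memory=2048 --no-kubernetes \
    --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46465 \
    --driver=kvm2 --container-runtime=crio
  # the share appears as a 9p filesystem inside the VM
  minikube -p mount-start-2-185823 ssh -- ls /minikube-host
  minikube -p mount-start-2-185823 ssh -- mount | grep 9p
  # the mount is re-established after a stop/start cycle
  minikube stop -p mount-start-2-185823 && minikube start -p mount-start-2-185823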

TestMultiNode/serial/FreshStart2Nodes (109.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 22:24:45.489116   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:25:25.171733   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.812142626s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.26s)
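The two-node bring-up is a single start invocation followed by a status check; a condensed sketch:

  minikube start -p multinode-054207 --nodes=2 --memory=2200 --wait=true \
    --driver=kvm2 --container-runtime=crio
  minikube -p multinode-054207 status            # one block per node
  kubectl --context multinode-054207 get nodes   # both nodes registered with the API server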

TestMultiNode/serial/DeployApp2Nodes (4.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-054207 -- rollout status deployment/busybox: (2.968467513s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-7fg9p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054207 -- exec busybox-5bc68d56bd-trmtr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.87s)
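The DNS validation above resolves an external name and the in-cluster service name from a busybox replica on each node; a condensed sketch of the same loop (it assumes, as in the test, that only the busybox pods live in the default namespace):

  for p in $(kubectl --context multinode-054207 get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context multinode-054207 exec "$p" -- nslookup kubernetes.io                         # external name
    kubectl --context multinode-054207 exec "$p" -- nslookup kubernetes.default.svc.cluster.local  # cluster DNS
  done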

TestMultiNode/serial/AddNode (44.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054207 -v 3 --alsologtostderr
E1212 22:26:39.569144   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-054207 -v 3 --alsologtostderr: (43.608595124s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-054207 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --output json --alsologtostderr
E1212 22:27:07.256113   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp testdata/cp-test.txt multinode-054207:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile707856264/001/cp-test_multinode-054207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207:/home/docker/cp-test.txt multinode-054207-m02:/home/docker/cp-test_multinode-054207_multinode-054207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test_multinode-054207_multinode-054207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207:/home/docker/cp-test.txt multinode-054207-m03:/home/docker/cp-test_multinode-054207_multinode-054207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test_multinode-054207_multinode-054207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp testdata/cp-test.txt multinode-054207-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile707856264/001/cp-test_multinode-054207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt multinode-054207:/home/docker/cp-test_multinode-054207-m02_multinode-054207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test_multinode-054207-m02_multinode-054207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m02:/home/docker/cp-test.txt multinode-054207-m03:/home/docker/cp-test_multinode-054207-m02_multinode-054207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test_multinode-054207-m02_multinode-054207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp testdata/cp-test.txt multinode-054207-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile707856264/001/cp-test_multinode-054207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt multinode-054207:/home/docker/cp-test_multinode-054207-m03_multinode-054207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207 "sudo cat /home/docker/cp-test_multinode-054207-m03_multinode-054207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 cp multinode-054207-m03:/home/docker/cp-test.txt multinode-054207-m02:/home/docker/cp-test_multinode-054207-m03_multinode-054207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test_multinode-054207-m03_multinode-054207-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.87s)
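The copy matrix above is built from two primitives, minikube cp and a per-node ssh; one hop as a sketch:

  # host -> primary node
  minikube -p multinode-054207 cp testdata/cp-test.txt multinode-054207:/home/docker/cp-test.txt
  # node -> node
  minikube -p multinode-054207 cp multinode-054207:/home/docker/cp-test.txt \
      multinode-054207-m02:/home/docker/cp-test.txt
  # verify on the receiving node
  minikube -p multinode-054207 ssh -n multinode-054207-m02 "sudo cat /home/docker/cp-test.txt"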

TestMultiNode/serial/StopNode (3.03s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-054207 node stop m03: (2.101155212s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054207 status: exit status 7 (463.051967ms)
-- stdout --
	multinode-054207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr: exit status 7 (465.25318ms)
-- stdout --
	multinode-054207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1212 22:27:17.709919   99224 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:27:17.710046   99224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:27:17.710058   99224 out.go:309] Setting ErrFile to fd 2...
	I1212 22:27:17.710063   99224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:27:17.710244   99224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:27:17.710415   99224 out.go:303] Setting JSON to false
	I1212 22:27:17.710450   99224 mustload.go:65] Loading cluster: multinode-054207
	I1212 22:27:17.710582   99224 notify.go:220] Checking for updates...
	I1212 22:27:17.710872   99224 config.go:182] Loaded profile config "multinode-054207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:27:17.710886   99224 status.go:255] checking status of multinode-054207 ...
	I1212 22:27:17.711234   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.711408   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.729985   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I1212 22:27:17.730443   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.731130   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.731165   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.731498   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.731682   99224 main.go:141] libmachine: (multinode-054207) Calling .GetState
	I1212 22:27:17.733299   99224 status.go:330] multinode-054207 host status = "Running" (err=<nil>)
	I1212 22:27:17.733318   99224 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:27:17.733627   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.733663   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.748191   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34109
	I1212 22:27:17.748617   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.749090   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.749112   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.749429   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.749631   99224 main.go:141] libmachine: (multinode-054207) Calling .GetIP
	I1212 22:27:17.752602   99224 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:27:17.753067   99224 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:27:17.753117   99224 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:27:17.753236   99224 host.go:66] Checking if "multinode-054207" exists ...
	I1212 22:27:17.753511   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.753547   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.767818   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I1212 22:27:17.768256   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.768674   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.768695   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.768974   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.769120   99224 main.go:141] libmachine: (multinode-054207) Calling .DriverName
	I1212 22:27:17.769283   99224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:27:17.769311   99224 main.go:141] libmachine: (multinode-054207) Calling .GetSSHHostname
	I1212 22:27:17.772194   99224 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:27:17.772541   99224 main.go:141] libmachine: (multinode-054207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:bc:d2", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:24:41 +0000 UTC Type:0 Mac:52:54:00:7d:bc:d2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-054207 Clientid:01:52:54:00:7d:bc:d2}
	I1212 22:27:17.772570   99224 main.go:141] libmachine: (multinode-054207) DBG | domain multinode-054207 has defined IP address 192.168.39.172 and MAC address 52:54:00:7d:bc:d2 in network mk-multinode-054207
	I1212 22:27:17.772695   99224 main.go:141] libmachine: (multinode-054207) Calling .GetSSHPort
	I1212 22:27:17.772858   99224 main.go:141] libmachine: (multinode-054207) Calling .GetSSHKeyPath
	I1212 22:27:17.772989   99224 main.go:141] libmachine: (multinode-054207) Calling .GetSSHUsername
	I1212 22:27:17.773162   99224 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207/id_rsa Username:docker}
	I1212 22:27:17.871298   99224 ssh_runner.go:195] Run: systemctl --version
	I1212 22:27:17.877350   99224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:27:17.892084   99224 kubeconfig.go:92] found "multinode-054207" server: "https://192.168.39.172:8443"
	I1212 22:27:17.892117   99224 api_server.go:166] Checking apiserver status ...
	I1212 22:27:17.892154   99224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:27:17.905143   99224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1129/cgroup
	I1212 22:27:17.915071   99224 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod767f78d84df6cc4b5db4cd1537aebe27/crio-9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639"
	I1212 22:27:17.915139   99224 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod767f78d84df6cc4b5db4cd1537aebe27/crio-9056b250491421594025fa51eddfe421e34e9e41afd867cfc99df08f568a0639/freezer.state
	I1212 22:27:17.926154   99224 api_server.go:204] freezer state: "THAWED"
	I1212 22:27:17.926190   99224 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1212 22:27:17.931275   99224 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1212 22:27:17.931300   99224 status.go:421] multinode-054207 apiserver status = Running (err=<nil>)
	I1212 22:27:17.931317   99224 status.go:257] multinode-054207 status: &{Name:multinode-054207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 22:27:17.931337   99224 status.go:255] checking status of multinode-054207-m02 ...
	I1212 22:27:17.931745   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.931791   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.946949   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I1212 22:27:17.947413   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.947854   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.947879   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.948226   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.948407   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetState
	I1212 22:27:17.950027   99224 status.go:330] multinode-054207-m02 host status = "Running" (err=<nil>)
	I1212 22:27:17.950049   99224 host.go:66] Checking if "multinode-054207-m02" exists ...
	I1212 22:27:17.950333   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.950373   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.965157   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I1212 22:27:17.965536   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.965995   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.966017   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.966314   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.966490   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetIP
	I1212 22:27:17.969335   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:27:17.969716   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:27:17.969737   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:27:17.969900   99224 host.go:66] Checking if "multinode-054207-m02" exists ...
	I1212 22:27:17.970222   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:17.970256   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:17.984769   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1212 22:27:17.985207   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:17.985718   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:17.985742   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:17.986034   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:17.986281   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .DriverName
	I1212 22:27:17.986473   99224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:27:17.986493   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHHostname
	I1212 22:27:17.989345   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:27:17.989777   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c3:3d", ip: ""} in network mk-multinode-054207: {Iface:virbr1 ExpiryTime:2023-12-12 23:25:48 +0000 UTC Type:0 Mac:52:54:00:db:c3:3d Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-054207-m02 Clientid:01:52:54:00:db:c3:3d}
	I1212 22:27:17.989833   99224 main.go:141] libmachine: (multinode-054207-m02) DBG | domain multinode-054207-m02 has defined IP address 192.168.39.15 and MAC address 52:54:00:db:c3:3d in network mk-multinode-054207
	I1212 22:27:17.989955   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHPort
	I1212 22:27:17.990140   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHKeyPath
	I1212 22:27:17.990274   99224 main.go:141] libmachine: (multinode-054207-m02) Calling .GetSSHUsername
	I1212 22:27:17.990429   99224 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17761-76611/.minikube/machines/multinode-054207-m02/id_rsa Username:docker}
	I1212 22:27:18.082624   99224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:27:18.095968   99224 status.go:257] multinode-054207-m02 status: &{Name:multinode-054207-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 22:27:18.096013   99224 status.go:255] checking status of multinode-054207-m03 ...
	I1212 22:27:18.096431   99224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 22:27:18.096476   99224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 22:27:18.111188   99224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I1212 22:27:18.111679   99224 main.go:141] libmachine: () Calling .GetVersion
	I1212 22:27:18.112294   99224 main.go:141] libmachine: Using API Version  1
	I1212 22:27:18.112317   99224 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 22:27:18.112672   99224 main.go:141] libmachine: () Calling .GetMachineName
	I1212 22:27:18.112865   99224 main.go:141] libmachine: (multinode-054207-m03) Calling .GetState
	I1212 22:27:18.114440   99224 status.go:330] multinode-054207-m03 host status = "Stopped" (err=<nil>)
	I1212 22:27:18.114455   99224 status.go:343] host is not running, skipping remaining checks
	I1212 22:27:18.114460   99224 status.go:257] multinode-054207-m03 status: &{Name:multinode-054207-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.03s)
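Worth noting: status doubles as a health probe here; with m03 stopped it still prints the per-node table above but exits 7 rather than 0, so a wrapper script can branch on the return code; a minimal sketch:

  minikube -p multinode-054207 status || echo "at least one node is down (exit code $?)"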

TestMultiNode/serial/StartAfterStop (30s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-054207 node start m03 --alsologtostderr: (29.341071568s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.00s)

TestMultiNode/serial/DeleteNode (1.82s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-054207 node delete m03: (1.238449015s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (440.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054207 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 22:44:17.803732   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 22:45:25.171670   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 22:46:39.568629   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:48:28.217425   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054207 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m20.041916848s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054207 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (440.61s)
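The kubectl get nodes -o go-template check above (multinode_test.go:410) prints the Ready condition status of every node so the test can confirm that all nodes of the restarted cluster report True. For reference, an equivalent standalone readiness check written against client-go could look like the sketch below; using client-go and the default kubeconfig path are assumptions for illustration, not part of the test itself.

// Sketch: list all nodes and print their Ready condition, mirroring the
// go-template check used by multinode_test.go. Assumes ~/.kube/config points
// at the cluster under test.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", node.Name, cond.Status)
			}
		}
	}
}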

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (53.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054207
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054207-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-054207-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.554804ms)

                                                
                                                
-- stdout --
	* [multinode-054207-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-054207-m02' is duplicated with machine name 'multinode-054207-m02' in profile 'multinode-054207'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054207-m03 --driver=kvm2  --container-runtime=crio
E1212 22:49:17.803696   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054207-m03 --driver=kvm2  --container-runtime=crio: (52.487357098s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054207
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-054207: exit status 80 (241.593712ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-054207
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-054207-m03 already exists in multinode-054207-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-054207-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.69s)
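The name-conflict behaviour exercised above can be reproduced outside the test harness. The sketch below shells out to a minikube binary the same way multinode_test.go does and checks for the MK_USAGE exit code (14) seen in the stderr; the binary being on PATH and the pre-existing multinode-054207 profile are assumptions, not something this report verifies.

// Hypothetical standalone reproduction of the duplicate-profile check above.
// Assumes minikube is on PATH and profile "multinode-054207" already exists
// with a machine named "multinode-054207-m02".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "multinode-054207-m02",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	// minikube exits with status 14 (MK_USAGE) when the requested profile name
	// collides with a machine name in another profile, as shown above.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}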

                                                
                                    
TestScheduledStopUnix (121.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-559035 --memory=2048 --driver=kvm2  --container-runtime=crio
E1212 22:54:42.619549   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 22:55:25.172492   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-559035 --memory=2048 --driver=kvm2  --container-runtime=crio: (50.098517058s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-559035 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-559035 -n scheduled-stop-559035
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-559035 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-559035 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-559035 -n scheduled-stop-559035
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-559035
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-559035 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1212 22:56:39.569361   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-559035
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-559035: exit status 7 (81.024858ms)

                                                
                                                
-- stdout --
	scheduled-stop-559035
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-559035 -n scheduled-stop-559035
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-559035 -n scheduled-stop-559035: exit status 7 (80.967024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-559035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-559035
--- PASS: TestScheduledStopUnix (121.91s)

                                                
                                    
TestKubernetesUpgrade (162.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.302071523s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-970815
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-970815: (3.304911048s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-970815 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-970815 status --format={{.Host}}: exit status 7 (94.110292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.356240302s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-970815 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (143.324303ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-970815] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-970815
	    minikube start -p kubernetes-upgrade-970815 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9708152 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-970815 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-970815 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.666646438s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-970815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-970815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-970815: (1.055428623s)
--- PASS: TestKubernetesUpgrade (162.04s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (107.782813ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-891482] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (103.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891482 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891482 --driver=kvm2  --container-runtime=crio: (1m42.734852747s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-891482 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.984552997s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-891482 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-891482 status -o json: exit status 2 (280.454154ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-891482","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-891482
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-891482: (1.309118309s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.57s)

                                                
                                    
TestNoKubernetes/serial/Start (29.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891482 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.815493967s)
--- PASS: TestNoKubernetes/serial/Start (29.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-891482 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-891482 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.504694ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.464106949s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.91s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-891482
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-891482: (1.82081631s)
--- PASS: TestNoKubernetes/serial/Stop (1.82s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891482 --driver=kvm2  --container-runtime=crio
E1212 22:59:17.803779   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891482 --driver=kvm2  --container-runtime=crio: (47.543476891s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.54s)

                                                
                                    
TestNetworkPlugins/group/false (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-828988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-828988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (126.356785ms)

                                                
                                                
-- stdout --
	* [false-828988] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:59:27.878935  109512 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:59:27.879309  109512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:59:27.879321  109512 out.go:309] Setting ErrFile to fd 2...
	I1212 22:59:27.879329  109512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:59:27.879602  109512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-76611/.minikube/bin
	I1212 22:59:27.880468  109512 out.go:303] Setting JSON to false
	I1212 22:59:27.881767  109512 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13322,"bootTime":1702408646,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:59:27.881860  109512 start.go:138] virtualization: kvm guest
	I1212 22:59:27.884311  109512 out.go:177] * [false-828988] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:59:27.885910  109512 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:59:27.885937  109512 notify.go:220] Checking for updates...
	I1212 22:59:27.888680  109512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:59:27.890200  109512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-76611/kubeconfig
	I1212 22:59:27.891426  109512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-76611/.minikube
	I1212 22:59:27.892598  109512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:59:27.893751  109512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:59:27.895592  109512 config.go:182] Loaded profile config "NoKubernetes-891482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 22:59:27.895752  109512 config.go:182] Loaded profile config "force-systemd-env-677496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:59:27.895827  109512 config.go:182] Loaded profile config "stopped-upgrade-809686": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1212 22:59:27.895908  109512 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:59:27.935116  109512 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 22:59:27.936331  109512 start.go:298] selected driver: kvm2
	I1212 22:59:27.936358  109512 start.go:902] validating driver "kvm2" against <nil>
	I1212 22:59:27.936378  109512 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:59:27.938580  109512 out.go:177] 
	W1212 22:59:27.939830  109512 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 22:59:27.941120  109512 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-828988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-828988

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-828988"

                                                
                                                
----------------------- debugLogs end: false-828988 [took: 3.720116067s] --------------------------------
helpers_test.go:175: Cleaning up "false-828988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-828988
--- PASS: TestNetworkPlugins/group/false (4.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-891482 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-891482 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.328779ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestPause/serial/Start (109.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-187677 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-187677 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m49.867419425s)
--- PASS: TestPause/serial/Start (109.87s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-809686
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (129.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m9.569646693s)
--- PASS: TestNetworkPlugins/group/auto/Start (129.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (38.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-187677 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-187677 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.343258112s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.36s)

                                                
                                    
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-187677 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-187677 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-187677 --output=json --layout=cluster: exit status 2 (282.376021ms)

                                                
                                                
-- stdout --
	{"Name":"pause-187677","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-187677","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
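The --output=json --layout=cluster status captured above encodes component health as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch for decoding the fields visible in this report follows; the struct covers only those fields, and the command's non-zero exit (status 2 here, because the cluster is paused) is deliberately tolerated. This is an illustrative consumer, not part of the test.

// Sketch: decode the subset of minikube's cluster-layout status JSON shown
// in the report above. The field set is limited to what is visible here.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// A paused cluster makes "minikube status" exit with status 2, so an error
	// here is expected; the JSON payload is still written to stdout.
	out, _ := exec.Command("minikube", "status", "-p", "pause-187677",
		"--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}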

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-187677 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-187677 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-187677 --alsologtostderr -v=5: (1.029403961s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
TestPause/serial/DeletePaused (1.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-187677 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-187677 --alsologtostderr -v=5: (1.05648133s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.89s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.890812894s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.89s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.373610359s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.37s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (117.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m57.488213259s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d9zxx" [d345ecf1-810d-40b7-ada5-0bf2059351d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d9zxx" [d345ecf1-810d-40b7-ada5-0bf2059351d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.013550997s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (120.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m0.351862593s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (120.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (138.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m18.112663681s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (138.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sgsgk" [da9edf3f-a4b9-45d6-9305-becbafcc5896] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025358847s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (16.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kg6ld" [e2e01edb-5a85-40f1-8807-c1da799c12f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 23:05:08.217933   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kg6ld" [e2e01edb-5a85-40f1-8807-c1da799c12f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 16.020879889s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (16.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (84.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.920047498s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.92s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hgbtd" [660182aa-fd0f-492f-aebc-012245043922] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.040280583s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bllqc" [96a5deaa-db98-4e95-b7de-0c93b2e84cbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bllqc" [96a5deaa-db98-4e95-b7de-0c93b2e84cbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.020991085s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.47s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (16.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-27p7v" [6ca5dede-0d68-43db-93c2-2084439a39c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-27p7v" [6ca5dede-0d68-43db-93c2-2084439a39c8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.015200214s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.44s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (103.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1212 23:06:39.569416   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-828988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m43.448566673s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (146.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-549640 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-549640 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m26.145067993s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-55kbd" [2c904904-7cd9-4efd-b827-e181e874b4c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-55kbd" [2c904904-7cd9-4efd-b827-e181e874b4c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.015123631s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pjq7z" [0ee8ff9f-7337-4936-bfab-0a2251b48fd0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.027362578s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jlgcz" [49581cef-e00e-40ae-a461-1dfc7c4b645e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jlgcz" [49581cef-e00e-40ae-a461-1dfc7c4b645e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.01938837s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (133.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-115023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-115023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m13.277940646s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (133.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (74.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-809120 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-809120 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m14.328380449s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-828988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-828988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ggpt6" [61b28435-2355-4c6d-a97d-66c986c44cb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ggpt6" [61b28435-2355-4c6d-a97d-66c986c44cb3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.012636149s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-828988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-828988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-850839 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 23:09:00.851409   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-850839 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m4.909854577s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-809120 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3b009e45-210e-439c-ad24-043cb2ae4f7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 23:09:01.565287   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.570603   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.580935   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.601283   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.641583   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.721995   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:01.882586   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:02.203049   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:02.843719   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:09:04.124490   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3b009e45-210e-439c-ad24-043cb2ae4f7b] Running
E1212 23:09:06.684726   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.031915708s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-809120 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-809120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-809120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197223608s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-809120 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-549640 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b53af585-9754-4561-8e28-c04e2d0d07d1] Pending
helpers_test.go:344: "busybox" [b53af585-9754-4561-8e28-c04e2d0d07d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 23:09:11.804940   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b53af585-9754-4561-8e28-c04e2d0d07d1] Running
E1212 23:09:17.803187   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.034010797s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-549640 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-549640 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-549640 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-115023 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3e27dce-5379-401e-8f19-a448e0f4d4b2] Pending
helpers_test.go:344: "busybox" [c3e27dce-5379-401e-8f19-a448e0f4d4b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3e27dce-5379-401e-8f19-a448e0f4d4b2] Running
E1212 23:09:42.525417   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026813559s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-115023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-115023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-115023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103908714s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-115023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a7a232d-7be4-46ec-9442-550e77e1037a] Pending
helpers_test.go:344: "busybox" [2a7a232d-7be4-46ec-9442-550e77e1037a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a7a232d-7be4-46ec-9442-550e77e1037a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.037911459s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-850839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-850839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103995024s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-850839 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (695.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-809120 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-809120 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m35.07643027s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-809120 -n embed-certs-809120
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (695.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (364.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-549640 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 23:11:53.068241   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.073554   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.083863   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.104156   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.144529   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.224960   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.385470   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:53.706085   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:54.346335   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:55.626606   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:11:58.186855   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-549640 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m4.33380138s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-549640 -n old-k8s-version-549640
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (364.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (651.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-115023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 23:12:19.859160   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-115023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m50.81281954s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115023 -n no-preload-115023
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (651.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (589.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-850839 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 23:12:32.052443   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:12:34.029257   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:12:46.048123   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:12:50.580594   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:13:13.362391   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.367675   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.377979   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.398293   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.438605   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.518996   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:13.679446   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:14.000128   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:14.641193   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:14.989656   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:13:15.921922   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:18.482251   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:23.603074   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:31.541614   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:13:33.843391   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:13:35.615455   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:13:53.973449   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:13:54.323992   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:14:01.565707   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:14:17.803744   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:14:29.247085   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:14:35.284634   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:14:36.910345   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:14:53.462061   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:15:02.203477   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:15:25.172175   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:15:29.888571   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:15:51.771552   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:15:57.205154   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:16:10.129072   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:16:19.456677   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:16:37.814654   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:16:39.568754   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:16:53.068138   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:17:09.617462   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:17:20.751491   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:17:37.302335   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-850839 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m48.909069784s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850839 -n default-k8s-diff-port-850839
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (589.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-549640 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-549640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640: exit status 2 (271.729548ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-549640 -n old-k8s-version-549640: exit status 2 (279.852977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-549640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-549640 -n old-k8s-version-549640
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)
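Spelled out, the pause check above is the following four commands (copied from the log); the exit status 2 on the two status calls is expected while the cluster is paused, since the API server reports Paused and the kubelet reports Stopped:

	out/minikube-linux-amd64 pause -p old-k8s-version-549640 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-549640 -n old-k8s-version-549640   # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-549640 -n old-k8s-version-549640     # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-549640 --alsologtostderr -v=1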

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-439645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 23:36:10.129250   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:36:39.569361   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:36:53.067407   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-439645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.118973805s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-439645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-439645 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.55152205s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.55s)
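The "cni mode requires additional setup" warning means the cluster was started with --network-plugin=cni but no CNI manifest has been applied yet, so pods cannot schedule and the pod-dependent sub-tests later in this group are no-ops. Outside the suite, a network plugin would be applied before deploying anything, along these lines (the manifest path is a placeholder, not something this run used):

	kubectl --context newest-cni-439645 apply -f <cni-manifest.yaml>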

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (331.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-439645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1212 23:39:47.126957   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:48.259544   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.264841   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.275123   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.295813   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.336158   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.416595   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.577146   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:48.897792   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:49.538783   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:50.819105   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:52.343741   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:39:53.379576   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:39:57.368181   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:39:58.500622   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:40:02.203897   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:40:08.741890   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:40:17.848402   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:40:25.172482   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/addons-361656/client.crt: no such file or directory
E1212 23:40:29.222286   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:40:33.304465   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:40:51.771176   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:40:58.809040   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:41:10.129080   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:41:10.183406   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:41:39.569529   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:41:53.067450   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:41:55.225146   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:42:04.609319   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:42:09.617207   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
E1212 23:42:20.730274   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:42:20.852626   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:42:32.104138   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:43:05.249838   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:43:13.361724   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/bridge-828988/client.crt: no such file or directory
E1212 23:43:54.818938   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/calico-828988/client.crt: no such file or directory
E1212 23:44:01.565495   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/auto-828988/client.crt: no such file or directory
E1212 23:44:11.381012   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:44:13.176028   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/custom-flannel-828988/client.crt: no such file or directory
E1212 23:44:17.803404   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/functional-136031/client.crt: no such file or directory
E1212 23:44:36.886355   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:44:39.066087   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/old-k8s-version-549640/client.crt: no such file or directory
E1212 23:44:42.621449   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/ingress-addon-legacy-220067/client.crt: no such file or directory
E1212 23:44:48.258682   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
E1212 23:44:56.112491   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/enable-default-cni-828988/client.crt: no such file or directory
E1212 23:45:02.203197   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/kindnet-828988/client.crt: no such file or directory
E1212 23:45:04.571197   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/no-preload-115023/client.crt: no such file or directory
E1212 23:45:12.663815   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/flannel-828988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-439645 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m30.715174193s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-439645 -n newest-cni-439645
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (331.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-439645 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-439645 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439645 -n newest-cni-439645
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439645 -n newest-cni-439645: exit status 2 (257.859936ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439645 -n newest-cni-439645
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439645 -n newest-cni-439645: exit status 2 (247.788584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-439645 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-439645 -n newest-cni-439645
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-439645 -n newest-cni-439645
E1212 23:45:15.944687   83825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-76611/.minikube/profiles/default-k8s-diff-port-850839/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)

                                                
                                    

Test skip (39/307)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
251 TestNetworkPlugins/group/kubenet 3.29
259 TestNetworkPlugins/group/cilium 7.03
266 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
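All eight TunnelCmd sub-tests above are skipped for the same reason: minikube tunnel needs root privileges to modify the host routing table, and the CI account cannot run 'route' without a password. Purely as an illustration (the user name and binary paths are assumptions about a typical Linux host, not taken from this environment), a sudoers drop-in that would unblock them might look like:

	# /etc/sudoers.d/minikube-tunnel  (illustrative only)
	jenkins ALL=(root) NOPASSWD: /sbin/route, /sbin/ip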

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-828988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-828988

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-828988"

                                                
                                                
----------------------- debugLogs end: kubenet-828988 [took: 3.125678541s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-828988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-828988
--- SKIP: TestNetworkPlugins/group/kubenet (3.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (7.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-828988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-828988" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-828988

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-828988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-828988"

                                                
                                                
----------------------- debugLogs end: cilium-828988 [took: 6.87703475s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-828988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-828988
--- SKIP: TestNetworkPlugins/group/cilium (7.03s)
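The pattern in this block is consistent: every kubectl-based step reports "context was not found" or "does not exist" because the kubeconfig captured under ">>> k8s: kubectl config" is empty (clusters: null, contexts: null), and every host-level step reports the missing cilium-828988 profile; the test was skipped before any cluster or kubeconfig context was created. Below is a minimal Go sketch, using client-go's clientcmd loader, of how a helper could check for the named context up front instead of issuing the doomed kubectl calls; this is illustrative, not how the minikube helpers actually behave.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl does (KUBECONFIG, then ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// With the empty config shown above (contexts: null), this lookup fails,
	// which is the same condition kubectl reports as
	// "context was not found for specified context: cilium-828988".
	if _, ok := cfg.Contexts["cilium-828988"]; !ok {
		fmt.Println("context cilium-828988 not found; skipping kubectl-based probes")
		return
	}
	fmt.Println("context cilium-828988 present; safe to run kubectl probes")
}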

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-685244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-685244
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
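The skip at start_stop_delete_test.go:103 is a driver gate: the group only exercises driver-mount behavior under VirtualBox, so this KVM job never qualifies. Below is a minimal Go sketch of how such a gate is commonly written with the standard testing package; the VM_DRIVER environment variable is an assumption for this sketch, not necessarily the setting the minikube tests read.

package example

import (
	"os"
	"testing"
)

// TestDisableDriverMounts illustrates a driver-gated skip like the one logged
// above: the body runs only when the suite is driven by VirtualBox.
func TestDisableDriverMounts(t *testing.T) {
	if os.Getenv("VM_DRIVER") != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// ... start a cluster with driver mounts disabled and assert behavior here ...
}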

                                                
                                    